Talk:Speak


Suggestion

Nice activity which is a lot of fun for children.

While reading about artificial intelligence at

http://programmer-art.org/articles/pythonai

there was the idea to add a new mode that connects Speak to a simple AI program like Eliza or a therapist bot. Opinions or volunteers?
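To make the idea concrete, here is a very rough sketch of what such a mode could look like, with the reply spoken through the espeak command-line program. The rules and helper names below are only illustrative, not anything that exists in Speak today (Python 2, as shipped on the XO):

<pre>
# Toy Eliza-style responder whose replies are spoken aloud with espeak.
import re
import subprocess

RULES = [
    (re.compile(r'\bi am (.*)', re.I), 'Why do you say you are %s?'),
    (re.compile(r'\bi feel (.*)', re.I), 'Tell me more about feeling %s.'),
    (re.compile(r'\bbecause (.*)', re.I), 'Is %s the real reason?'),
]
DEFAULT = 'Please, go on.'

def respond(text):
    # Return the first matching canned reply, or a neutral prompt.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template % match.group(1)
    return DEFAULT

def say(text):
    # Speak aloud with the espeak command-line program.
    subprocess.call(['espeak', text])

while True:                          # Ctrl+C to quit
    reply = respond(raw_input('> '))
    print reply
    say(reply)
</pre>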


Some feedback

Clever!

I'd been wondering if the XO could do speech, and along comes Speak. Perfect. I'd like to use it for the kids' spelling tests: I give it a list, Speak says a word, kid types the word, Speak says right or wrong, and so on. Something to add to the "futures" list?
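For what it's worth, the quiz loop I have in mind is small enough to sketch. This uses the espeak command-line program directly; the word list and helper names are just examples, not something Speak provides (Python 2):

<pre>
# Rough sketch of a spoken spelling test built on the espeak command line.
import subprocess

def say(text):
    subprocess.call(['espeak', text])

def spelling_test(words):
    score = 0
    for word in words:
        say('Spell the word: ' + word)
        answer = raw_input('Your spelling: ').strip().lower()
        if answer == word.lower():
            say('Right!')
            score += 1
        else:
            # Spell the word out letter by letter as feedback.
            say('Sorry, the word is spelled ' + ' '.join(word))
    say('You got %d out of %d.' % (score, len(words)))

spelling_test(['apple', 'banana', 'cherry'])
</pre>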

Another question for you, if you have the time. Is the speech library or API accessible from Pippy? I've only just started wading through the Developers section of the wiki, but I haven't yet come across any documentation on Pippy or speech, and Pippy is something I have a chance of handling.

Tom 2108 PST 12 January 2008

Tom Hannen is working on a talking spelling game called TalknType that you might want to look at. We've been talking about ways that these two activities could work together.

As for access from Pippy, that will likely be handled by the Screen_Reader API.
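In the meantime, one simple workaround from Pippy is to call the espeak command-line program directly, since that is the engine Speak is built on. This is not the Screen_Reader API, just a sketch that assumes espeak is installed on the build:

<pre>
# Speaking from Pippy by shelling out to espeak (not the Screen_Reader API).
import subprocess

def say(text, pitch=50, rate=170, voice='default'):
    # -p sets pitch (0-99), -s sets speed in words per minute, -v picks a voice.
    subprocess.call(['espeak',
                     '-p', str(pitch),
                     '-s', str(rate),
                     '-v', voice,
                     text])

say('Hello from Pippy!')
</pre>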

jminor 02:06, 14 January 2008 (EST)

Some feedback

Nice job! This is fun to play with and extremely cute. I love that the activity speaks the options as you change them. Some minor points:

  • The XEyes thing is great, but the distance between the two eyes means that for some screen positions one pupil is unrealistically off-level from the other. I'd recommend using the same y coordinate for both eyes; the average of the y coordinates that you are currently using should work.
  • It looks like you're updating the eyes on a timer. I suspect that it might be more CPU efficient to just forward the mouse motion events from the widget's parent, but I don't know enough about gtk to know if there are any obvious drawbacks to this approach.
  • The mouth doesn't close all the way at the end of a sentence.
  • It looks like you're already aware of the occasional stutters.
  • One mouth corner is drawn differently from the other; setting cairo to use rounded path joins and caps might look a little better (see the sketch after this list). (now fixed in v4)
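For the rounded-joins point, the Cairo calls involved are just two setters on the drawing context. The mouth curve in this sketch is a stand-in, not Speak's actual drawing code:

<pre>
# Stroking a mouth-like curve with rounded joins and caps in Cairo (pycairo).
import cairo

def draw_mouth(cr, x, y, width, openness):
    cr.set_line_width(12)
    cr.set_line_join(cairo.LINE_JOIN_ROUND)  # round corners where segments meet
    cr.set_line_cap(cairo.LINE_CAP_ROUND)    # round ends, so both corners match
    cr.move_to(x, y)
    cr.curve_to(x + width * 0.3, y + openness,
                x + width * 0.7, y + openness,
                x + width, y)
    cr.stroke()
</pre>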

Joe 00:28, 10 January 2008 (EST)

Thanks for the ideas. I wasn't shooting for realistic eyes, just charming ones, but I do want to make more eye-styles, so maybe Googly and Regular could let you pick between the pupil motions that you describe.

As for the timer, I removed it and replaced it with mouse-motion callbacks in v3. Unfortunately I'm not getting mouse-motion events while dragging the sliders, which was a key feature that I liked. I'll try to re-enable that somehow. I also modified the mouth callback so that it doesn't draw when there is no sound playing. With nothing going on the CPU is nearly 0%.
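For reference, the basic hookup without a timer looks roughly like this in PyGTK; the Eyes widget and its look_at() method are stand-ins for Speak's own code. Note that while a slider is being dragged it holds a pointer grab, so this alone probably won't bring back eye motion during drags:

<pre>
# Sketch: forwarding pointer motion to the eyes without a timer (PyGTK 2).
import gtk

def on_motion(widget, event, eyes):
    # event.x / event.y are the pointer coordinates in the widget's window.
    eyes.look_at(event.x, event.y)
    return False  # let other handlers see the event too

def hook_up(window, eyes):
    window.add_events(gtk.gdk.POINTER_MOTION_MASK)
    window.connect('motion-notify-event', on_motion, eyes)
</pre>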

I'll take a look at the mouth corners issue. I've never seen the mouth fail to close at the end of a sentence, but if I do then I'll try to fix it.

Thanks for the feedback!

jminor 13:38, 12 January 2008 (EST)

Some feedback

Fantastic Application! JoshSeal 15:31, 10 January 2008 (EST)

Some feedback

This is a great idea for a program! My five-year-old son and I had a lot of fun playing with Speak this morning. As we played, my son enjoyed having me write a word on paper; then he would type it in and listen to it being read. He immediately wanted to run and show others. We had to retype words to show my husband, again to show his older brother, and again to show mom... It would be nice to have the words stick around, perhaps a pop-up that shows the last few words/phrases entered that could be reselected. This is a terrific learning-to-read activity. We also noticed that when entering very large numbers the voice did not read them correctly. Maybe when inputting large numbers the voice could say something like 'oops - I don't know numbers that big' or something to that effect. Thanks for creating this!

I'll see if I can wire up the up/down arrows to let you cycle back to old sentences. I also want the text to stay around while it is talking so you can compare the voice with what you typed. I'm not sure I can/should fix the large number issue at the activity level. I'll see if I can file a bug with espeak about that. jminor 13:38, 12 January 2008 (EST)
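One possible way to wire up the arrow keys, assuming the text is typed into a gtk.Entry; the class and attribute names below are made up for illustration and are not Speak's actual code:

<pre>
# Sketch: cycling through previously entered sentences with Up/Down (PyGTK 2).
import gtk

class HistoryEntry(gtk.Entry):
    def __init__(self):
        gtk.Entry.__init__(self)
        self.history = []   # sentences already spoken
        self.position = 0   # index into history while browsing
        self.connect('activate', self._on_activate)
        self.connect('key-press-event', self._on_key_press)

    def _on_activate(self, entry):
        text = self.get_text().strip()
        if text:
            self.history.append(text)
        self.position = len(self.history)

    def _on_key_press(self, entry, event):
        if event.keyval == gtk.keysyms.Up and self.position > 0:
            self.position -= 1
        elif event.keyval == gtk.keysyms.Down and self.position < len(self.history) - 1:
            self.position += 1
        else:
            return False    # not ours; let gtk handle the key
        self.set_text(self.history[self.position])
        return True         # handled
</pre>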

Speak v4 lets you use the arrow keys or the history popup to get back to the things you have typed previously. It also saves the history, voice and face settings in the Journal so you can resume right where you left off. jminor 02:47, 13 February 2008 (EST)

Accessibility

On alternative communication, I left a comment at Talk:Accessibility#Augmentative and Alternative Communication that some folks with more programming knowledge than I might be interested in looking at. --Neskaya 00:00, 13 February 2008 (EST)

That's a great idea. I have chatted with the folks making the Screen_Reader about having Speak use the speech dispatcher, which is meant to help folks who are blind. I hadn't thought much about folks who have trouble typing, other than the obvious idea that children can learn (or at least be motivated to learn) to type by playing with Speak. jminor 00:47, 13 February 2008 (EST)

So ... my ideas were not so much for those having trouble typing (though that too), but for those who have communicative difficulty, since Speak does, after all, talk. My idea is based on my own use of Speak in situations where I'm not socially comfortable enough to talk (I have social (or, as I don't like to call it, selective) mutism), which was in turn based on seeing what a kid in the autism class I TA for did when I left the activity running for them to play with. For the moment, if we can change the list of things that you already typed into an (editable) list of common phrases, I think it'd be a great start. --Neskaya 21:26, 14 February 2008 (EST)
Now that there is a saved history, you could type the phrases that you want once, save that in the Journal with the Keep button and then resume that entry each time you want to start with those phrases. Could you give that a try to see if it works the way you are hoping? Also, if the screen reader project gets some more momentum then you could achieve the same thing just by keeping phrases you want to use in a Write activity and using the screen reader to speak them as desired. That would give you lots of flexibility. jminor 00:37, 15 February 2008 (EST)
True, that does work, for now. The idea of keeping things in a Write activity doesn't really work, though. Alternative communication that is effective never ends up being that simple. I'm on IRC much of the time if you have further ideas on this (as IRC is easier for me to use than the wiki). --Neskaya 17:20, 15 February 2008 (EST)


Please see and comment on the OLPCIconSpeak project: Speech_synthesis#OLPCIconSpeak.2C_under_development. Thanks!

Some feedback

Is it possible to feed Speak with a text file? My daughter, who is a few months shy of three, likes to listen to stories. I could feed in a story as a text file and let the character speak it out. Sverma 9:48, 7 July 2008 (PST)
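Speak itself does not read files yet, but as a stopgap the underlying espeak program can read a text file directly; here is a small sketch you could run from the Terminal or Pippy (the file name is just an example):

<pre>
# Reading a story from a plain text file aloud with espeak.
import subprocess

def read_story(path, rate=140):
    # -f tells espeak to read its input from a file; -s sets words per minute.
    subprocess.call(['espeak', '-s', str(rate), '-f', path])

read_story('story.txt')
</pre>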

Some feedback

It is NOT WORKING with newer Joyride builds :-( It was OK under Joyride 2154 but not in 2271 (it broke somewhere in between, in a build I do not remember).

Not working with SCIM

The Speak activity (version 9) does not seem to work with SCIM. I can enter text in the text box in both English and Nepali, but when I press Enter, Speak does not speak. Pradosh