Speech synthesis
Scope
This article is for collecting ideas and resources for using text-to-speech (TTS) speech synthesis on the XO.
Applications of Speech Synthesis with respect to OLPC
Speech synthesis will be useful not only for improving the accessibility of the laptop but also for providing learning aids to the student.
Some simple educational activities that would benefit from the speech synthesis project include:
- Pronounce - An activity teaching the child how to pronounce words correctly. It can be scaled up in the future to use speech recognition or analysis of audio files to take audio input from the student. Based on analysis and comparison of the input audio, the activity can suggest appropriate corrections in the way the child speaks.
- Story Book Reader - The Read activity can double as an activity that reads aloud stories that the child downloads on his/her XO. Children can be encouraged to read more and learn as much as they can. Learning through listening has its own advantages when compared to learning through reading and ad-hoc experimentation.
- Listen and Spell - Students can listen to the XO speak a word. They must then spell the word and see if they did so correctly. This can be scaled up to a multiplayer game where students can challenge other students in their area. See wiki.laptop.org/go/talkntype for beginning work in this area, and the sketch after this list for a minimal example.
- Talking Chatbots - Kids would love to shoot questions to an AI chatbot and hear it answer.
- Accessibility - Speech synthesis tools are an integral component of software meant to improve accessibility. See Orca (http://live.gnome.org/Orca) for more info.
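Below is a minimal sketch of the Listen and Spell idea, assuming espeak is installed on the XO; the word list and the listen_and_spell helper are illustrative only, not part of any existing activity:

import random
import subprocess

WORDS = ["banana", "school", "bicycle"]   # placeholder vocabulary

def listen_and_spell():
    word = random.choice(WORDS)
    subprocess.call(["espeak", word])           # the XO speaks the word
    attempt = raw_input("Spell the word: ")     # raw_input: Python 2, as on the XO
    if attempt.strip().lower() == word:
        subprocess.call(["espeak", "Correct!"])
    else:
        subprocess.call(["espeak", "Try again. The word was " + word])

listen_and_spell()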
Also see the following article, which is a good read in the present context: Effective Adult Literacy Program
Existing software
Speak
Type text, and a funny face speaks what you typed. Pitch, speed, glasses, and mouth are adjustable.
Others
There are FOSS (Free and Open Source Software) speech-synthesis packages which run on devices comparable to the XO. We are much more concerned with localization than is typical, and dialects can be a political issue. But TTS would help with accessibility, and could be very cool.
Speech synthesis involves a set of complex trade-offs among synthesizer size, fidelity, and the effort needed to localize a new language. The Wikipedia speech synthesis article discusses available software, which includes Festival, Flite, and eSpeak.
eSpeak is small enough for us to bundle and covers quite a few languages: roughly ten are currently supported and tuned by native speakers, and localization to ten more languages is underway.
Synthesis is essential for making content accessible to people with vision problems, and it will need to be integrated with the ATK accessibility library in use, as well as with literacy training and other uses within the GUI. Full localization therefore involves selecting a suitable synthesis system, integrating it into the ATK framework, and localizing that system for the particular language involved.
Speech synthesis is usually not a good guide for pronunciation – but it may be better than a poor teacher who has never had the opportunity to learn from a native speaker of that language.
eSpeak
eSpeak is currently included on the XO, but it does not talk directly to the sound card, since the XO uses ALSA instead of OSS as its main sound system and OSS emulation in ALSA is not yet enabled by default. Manually configuring your XO to emulate OSS in ALSA will provide the device nodes that you require and allow full eSpeak functionality. - Dking
If your XO's ALSA sound setup is lacking OSS emulation, some text can still be played by piping eSpeak's standard output to another program:
$ espeak --stdout "Ello world." | gst-launch fdsrc fd=0 ! wavparse ! alsasink
$ espeak --stdout -vpt "Bem-vindo ao wiki da OLPC" | gst-launch fdsrc fd=0 ! wavparse ! alsasink
$ espeak --stdout "Using aplay." | aplay -
However, for some initial sounds, eSpeak fails to output valid audio to standard out (Trac #4002). This includes the letters c, h, k, p, q, t, v, z, and possibly others. For example, this still won't work in build 703 (aka Update.1, eSpeak v1.28):
$ espeak --stdout "hello world." | aplay
A workaround is to first write the output to a file, then play back the file:
$ espeak -w temp.wav "hello world."; aplay temp.wav
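For activities that want to reuse this workaround programmatically, here is a minimal Python sketch, assuming espeak and aplay are on the PATH; speak_text is a hypothetical helper name:

import os
import subprocess
import tempfile

def speak_text(text, voice="en"):
    # Write the synthesized audio to a temporary WAV file, then play it with
    # aplay, avoiding the broken --stdout path for some initial sounds.
    fd, path = tempfile.mkstemp(suffix=".wav")
    os.close(fd)
    try:
        subprocess.call(["espeak", "-v", voice, "-w", path, text])
        subprocess.call(["aplay", path])
    finally:
        os.remove(path)

speak_text("hello world")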
Screen Reader is a DBus interface that allows the XO to use eSpeak via Python.
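The sketch below only illustrates how an activity might call such a service with dbus-python; the service name, object path, and method are placeholders, not the real interface, which is documented on the Screen Reader page:

import dbus

bus = dbus.SessionBus()
# Placeholder names: substitute the ones documented for the Screen Reader service.
proxy = bus.get_object("org.laptop.ScreenReader", "/org/laptop/ScreenReader")
reader = dbus.Interface(proxy, dbus_interface="org.laptop.ScreenReader")
reader.Say("hello world")   # hypothetical method name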
- http://espeak.sourceforge.net/languages.html
- http://sourceforge.net/forum/forum.php?thread_id=1679272&forum_id=538920 Improving the Brazilian Portuguese voice.
Festival
- http://festvox.org/festival/ multi-lingual speech synthesis
- http://www.speech.cs.cmu.edu/flite/ Festival-lite is a small, fast run-time synthesis engine.
- http://festlang.berlios.de/ wiki
- http://festvox.org/ building of new synthetic voices
- http://tcts.fpms.ac.be/synthesis/mbrola.html The MBROLA Project - Towards a Freely Available Multilingual Speech Synthesizer
Flite is not currently included on the XO. Unless that changes, it would have to come out of your activity's space budget.
First, run /sbin/init 3 so yum doesn't run out of memory. After yum, reboot.
$ yum install flite
$ flite -t 'Hello, world!'
- Does it always sound this bad, or is it just the default voice that works poorly? MitchellNCharity 16:42, 22 October 2007 (EDT)
- The default voice isn't great. The arctic-hts voices are much better, but they are also quite large (2-3 MB each) and not lightweight on the CPU either. Mattdm 01:25, 4 October 2008 (UTC)
Festival is not currently included on the XO. Unless that changes, it would have to come out of your activity's space budget.
First, run /sbin/init 3 so yum doesn't run out of memory. After yum, reboot.
$ yum install festival
$ echo 'Hello, world!' | festival --tts
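A short Python sketch of driving Festival the same way from an activity, assuming the festival binary is installed; say_with_festival is an illustrative name:

import subprocess

def say_with_festival(text):
    # Pipe the text to Festival's text-to-speech mode, equivalent to
    # `echo 'Hello, world!' | festival --tts` on the command line.
    proc = subprocess.Popen(["festival", "--tts"], stdin=subprocess.PIPE)
    proc.communicate(text)

say_with_festival("Hello, world!")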
FreeIconToSpeech
The goal of FreeIconToSpeech is to provide a low-cost assistive/augmentative communication tool for people with speech, motor, and/or developmental challenges. The immediate opportunity is to create open source software to allow a user to select concepts through a menu of icons, and synthesize speech from those selected concepts. See the FreeIconToSpeech page for more information.
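As a rough illustration of the concept-selection idea, selected concept labels could be handed straight to eSpeak. This is only a sketch, assuming espeak is available; the speak_concepts helper and the example concepts are placeholders:

import subprocess

def speak_concepts(concepts):
    # Join the selected icons' concept labels into a simple phrase and synthesize it.
    subprocess.call(["espeak", " ".join(concepts)])

# e.g. the user selects the icons for "I", "want", "water"
speak_concepts(["I", "want", "water"])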
The state of the art
Commercial text-to-speech programs are getting very good now. The examples at the Digital Future Software Company site are very clear. They use AT&T technology and provide examples of male and female speech in English, French, and Spanish. The XO needs open-source software that can approach this quality in a wide range of languages. --Ricardo 04:07, 17 August 2007 (EDT)
Resources
See also
- Screen Reader
- Speech recognition
- Shtooka Project
- Speak - A simple but cute activity which animates a face as it reads the words typed by the child
- Words - A translating dictionary with speech synthesis
- Talkntype - Initial draft of an activity based on the Speak&Spell toy, using eSpeak speech synthesis.
- GSOC08 Educational Vowel Synthesiser