Abstract
Speech can express subjective meanings and intents that, to be fully understood, rely heavily on its affective perception. Some Text-to-Speech (TTS) systems show weaknesses in their emotional expressivity, but this can be improved by better parameterization of the acoustic and prosodic parameters. This paper describes an approach to improving emotional expressivity in a speech synthesizer. Our technique uses several linguistic resources that recognize emotions in a text and assigns appropriate parameters to the synthesizer to produce suitably expressive speech. For evaluation purposes we used the MARY TTS system to read out "happy" and "sad" news. The preliminary perceptual test results are encouraging: human judges, listening to speech synthesized with our approach, perceived "happy" emotions much better than when they listened to non-affective synthesized speech.
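The paper itself presents no code, but the pipeline it outlines (text-level emotion recognition driving prosodic parameter settings for the synthesizer) can be sketched. The following is a minimal, illustrative Python sketch under stated assumptions: the keyword lexicon, the rate/pitch offsets, and the helper names (`detect_emotion`, `to_prosody_markup`) are hypothetical and do not come from the paper; it only assumes SSML-style `<prosody>` markup of the general kind MARY TTS accepts.

```python
# Illustrative sketch, NOT the paper's actual pipeline: spot a coarse
# emotion label in the input text, map it to prosodic settings, and wrap
# the text in SSML-style prosody markup for the synthesizer.
# The lexicon entries and parameter values below are assumptions.

EMOTION_LEXICON = {
    "happy": {"win", "celebrate", "joy", "success"},
    "sad": {"loss", "tragedy", "mourn", "funeral"},
}

# Hypothetical prosody settings per emotion: faster and higher-pitched
# for "happy", slower and lower for "sad"; real values would come from
# perceptual tuning, as in the paper's evaluation.
PROSODY = {
    "happy": {"rate": "+15%", "pitch": "+20%"},
    "sad": {"rate": "-20%", "pitch": "-15%"},
    "neutral": {"rate": "+0%", "pitch": "+0%"},
}


def detect_emotion(text: str) -> str:
    """Naive keyword spotting; stands in for the paper's linguistic resources."""
    words = set(text.lower().split())
    for emotion, keywords in EMOTION_LEXICON.items():
        if words & keywords:
            return emotion
    return "neutral"


def to_prosody_markup(text: str) -> str:
    """Wrap the text in an SSML-style <prosody> element for the synthesizer."""
    p = PROSODY[detect_emotion(text)]
    return f'<prosody rate="{p["rate"]}" pitch="{p["pitch"]}">{text}</prosody>'


if __name__ == "__main__":
    # A "happy" news sentence gets faster, higher-pitched prosody settings.
    print(to_prosody_markup("The local team will celebrate a great success."))
```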
Original language | English |
---|---|
Title of host publication | Proceedings - 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009 |
Publication status | Published - 2009 |
Externally published | Yes |
Event | 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009, Amsterdam |
Duration | 10 Sept 2009 → 12 Sept 2009 |
Other
Other | 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009 |
---|---|
City | Amsterdam |
Period | 10 Sept 2009 → 12 Sept 2009 |
ASJC Scopus subject areas
- Artificial Intelligence
- Computer Vision and Pattern Recognition
- Human-Computer Interaction
- Software