Abstract
Speech is produced by articulating speech organs such as the jaw, tongue, and lips. We have developed an articulatory-based speech synthesis model that converts a phoneme string into a continuous acoustic signal by mimicking the human speech production process. This paper describes a computational model of the speech production process that involves a motor process for generating articulatory movements from a motor task sequence and an articulatory-to-acoustic mapping for determining the vocal-tract acoustic characteristics.

† NTT Basic Research Laboratories.
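The two-stage pipeline described in the abstract (a motor process producing articulator movements from a task sequence, followed by an articulatory-to-acoustic mapping) can be sketched as follows. This is a minimal illustrative sketch, not the paper's model: the phoneme targets, the first-order smoothing, and the formant-like mapping are all assumptions chosen only to make the structure concrete.

```python
# Hypothetical sketch of a two-stage articulatory synthesis pipeline:
# (1) a motor process maps a phoneme string to articulator trajectories,
# (2) an articulatory-to-acoustic mapping turns them into acoustic frames.
# All numeric targets and coefficients are illustrative, not from the paper.

# Illustrative articulatory targets (jaw opening, tongue height, lip rounding)
# for a few vowels; a real model would use measured articulatory data.
TARGETS = {
    "a": (0.8, 0.2, 0.1),
    "i": (0.2, 0.9, 0.0),
    "u": (0.3, 0.7, 0.9),
}

def motor_process(phonemes, frames_per_phoneme=4):
    """Generate a smooth articulator trajectory from a target (motor task) sequence."""
    trajectory = []
    state = (0.5, 0.5, 0.5)  # neutral articulator configuration
    for ph in phonemes:
        target = TARGETS[ph]
        for _ in range(frames_per_phoneme):
            # First-order smoothing toward the target mimics sluggish articulators.
            state = tuple(s + 0.5 * (t - s) for s, t in zip(state, target))
            trajectory.append(state)
    return trajectory

def articulatory_to_acoustic(trajectory):
    """Map each articulatory frame to crude formant-like values (Hz)."""
    frames = []
    for jaw, tongue, lips in trajectory:
        f1 = 300 + 500 * jaw                    # wider jaw opening raises F1
        f2 = 900 + 1200 * tongue - 300 * lips   # tongue fronting raises F2, rounding lowers it
        frames.append((f1, f2))
    return frames

acoustic = articulatory_to_acoustic(motor_process("aiu"))
print(len(acoustic))  # 3 phonemes x 4 frames per phoneme
```

The point of the sketch is the separation of concerns: the motor stage knows nothing about acoustics, and the mapping stage knows nothing about phonemes, matching the modular structure the abstract describes.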
Original language | English |
---|---|
Pages (from-to) | 399-404 |
Number of pages | 6 |
Journal | NTT R and D |
Volume | 47 |
Issue number | 4 |
Publication status | Published - 1998 |
Externally published | Yes |
ASJC Scopus subject areas
- Electrical and Electronic Engineering