Abstract
We present a method for determining articulatory movements from speech acoustics using an HMM (Hidden Markov Model)-based speech production model. The model statistically generates speech acoustics and articulatory movements from a given phonemic string. It consists of HMMs of articulatory movements for each phoneme and an articulatory-to-acoustic mapping for each HMM state. For given speech acoustics, a maximum a posteriori (MAP) estimate of the articulatory parameters under the statistical model is derived. The method was evaluated on sentence utterances by comparing the estimated articulatory parameters with observed ones. The average RMS error of the estimated articulatory parameters was 1.79 mm when the phonemic information of the utterance was given and 2.16 mm when it was not.
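As a rough illustration of the estimation step, the sketch below (not the authors' implementation) computes a closed-form MAP estimate of an articulatory vector from a single acoustic frame, assuming a Gaussian prior on the articulatory parameters from one HMM state and a linear-Gaussian articulatory-to-acoustic mapping for that state; the function name, dimensions, and parameter values are hypothetical.

```python
# Minimal sketch: MAP estimate of articulatory parameters x given acoustics y,
# under the assumptions p(x) = N(mu_x, Sigma_x) (state-dependent prior) and
# p(y | x) = N(A x + b, Sigma_e) (state-dependent linear mapping).
import numpy as np

def map_articulatory_estimate(y, A, b, Sigma_e, mu_x, Sigma_x):
    """Closed-form MAP estimate of x for one HMM state (Gaussian conditioning)."""
    Se_inv = np.linalg.inv(Sigma_e)
    Sx_inv = np.linalg.inv(Sigma_x)
    # Posterior precision and right-hand side of the normal equations.
    precision = A.T @ Se_inv @ A + Sx_inv
    rhs = A.T @ Se_inv @ (y - b) + Sx_inv @ mu_x
    return np.linalg.solve(precision, rhs)

# Toy usage with random state parameters (hypothetical dimensions:
# 12 acoustic features, 7 articulatory parameters).
rng = np.random.default_rng(0)
A = rng.normal(size=(12, 7)); b = rng.normal(size=12)
Sigma_e = 0.1 * np.eye(12); mu_x = np.zeros(7); Sigma_x = np.eye(7)
y = rng.normal(size=12)
x_map = map_articulatory_estimate(y, A, b, Sigma_e, mu_x, Sigma_x)
print(x_map.shape)  # (7,)
```

In a full system, the state sequence would also have to be inferred (e.g., from the phonemic string or by search over states), and the estimate would combine all frames of the utterance rather than a single frame as shown here.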
| Original language | English |
| --- | --- |
| Pages | 2305-2308 |
| Number of pages | 4 |
| Publication status | Published - 2002 |
| Externally published | Yes |
| Event | 7th International Conference on Spoken Language Processing, ICSLP 2002 - Denver, United States |
| Duration | 2002 Sept 16 → 2002 Sept 20 |
Other

| Other | 7th International Conference on Spoken Language Processing, ICSLP 2002 |
| --- | --- |
| Country/Territory | United States |
| City | Denver |
| Period | 02/9/16 → 02/9/20 |
ASJC Scopus subject areas
- Language and Linguistics
- Linguistics and Language