We present a method that determines articulatory movements from speech acoustics using an HMM (Hidden Markov Model)-based speech production model. The model statistically generates speech acoustics and articulatory movements from a given phonemic string; it consists of HMMs of articulatory movements for each phoneme and an articulatory-to-acoustic mapping for each HMM state. For given speech acoustics, the maximum a posteriori (MAP) estimate of the articulatory parameters under this statistical model is derived. The method's performance was evaluated on sentences by comparing the estimated articulatory parameters with observed ones. The average RMS error of the estimated articulatory parameters over an utterance was 1.79 mm with phonemic information and 2.16 mm without it.
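As a rough illustration of the MAP estimation described above, the following sketch treats a single HMM state with a Gaussian prior over the articulatory vector and a linear-Gaussian articulatory-to-acoustic mapping. This is a simplifying assumption for illustration only: the paper's actual mapping, parameter names, and dimensionalities are not specified here, and all identifiers below (`A`, `R`, `mu`, `Sigma`) are hypothetical.

```python
import numpy as np

def map_articulatory_estimate(y, A, R, mu, Sigma):
    """MAP estimate of an articulatory vector x for one HMM state.

    Assumed simplified model (not the paper's exact formulation):
      prior:      x ~ N(mu, Sigma)   (state's articulatory distribution)
      likelihood: y ~ N(A @ x, R)    (linear articulatory-to-acoustic map)

    With Gaussian prior and likelihood, the posterior is Gaussian and
    its mean equals the MAP estimate:
      x* = (Sigma^-1 + A^T R^-1 A)^-1 (Sigma^-1 mu + A^T R^-1 y)
    """
    Si = np.linalg.inv(Sigma)
    Ri = np.linalg.inv(R)
    precision = Si + A.T @ Ri @ A          # posterior precision matrix
    rhs = Si @ mu + A.T @ Ri @ y           # precision-weighted mean terms
    return np.linalg.solve(precision, rhs)

# Usage: 2-D articulatory space mapped into a 3-D acoustic space.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 2))                # hypothetical linear mapping
mu = np.array([1.0, -0.5])                 # prior articulatory mean
Sigma = np.eye(2)                          # prior covariance
R = 0.1 * np.eye(3)                        # acoustic noise covariance
x_hat = map_articulatory_estimate(A @ mu, A, R, mu, Sigma)
```

When the observed acoustics exactly match the prior mean's prediction (`y = A @ mu`), the MAP estimate reduces to `mu`, which is a handy sanity check for the formula. In the full method, one such estimate would be combined per state along the HMM state sequence rather than in isolation.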
Published - 2002
7th International Conference on Spoken Language Processing, ICSLP 2002 - Denver, United States
Duration: 16 Sep 2002 → 20 Sep 2002