Speech production model based on articulatory movements

Masaaki Honda*, Tokihiko Kaburagi

*Corresponding author for this work

Research output: Article, peer-reviewed

Abstract

Speech is generated by articulating speech organs such as the jaw, tongue, and lips according to their motor commands. We have developed a speech production model that converts a phoneme-specific motor task sequence into a continuous acoustic signal. This paper describes a computational model of the speech production process that involves a motor process, which generates articulatory movements from the motor task sequences, and an aero-acoustic process in the vocal tract, which produces speech signals. Simulation results on continuous speech production show that this model can accurately predict actual articulatory movements and generate natural-sounding speech.
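The abstract describes a two-stage pipeline: a motor process that turns a discrete motor task sequence into continuous articulatory trajectories, and an aero-acoustic process that maps the articulatory state to a speech signal. A minimal sketch of that structure is shown below; all function names, the first-order smoothing rule, and the placeholder acoustic mapping are illustrative assumptions, not the authors' actual model.

```python
def motor_stage(task_sequence, steps_per_task=10, smoothing=0.5):
    """Generate a smooth articulator trajectory from discrete motor targets.

    Hypothetical stand-in for the motor process: each phoneme-specific task
    is a target position, and the articulator relaxes toward it over time.
    """
    position = task_sequence[0]
    trajectory = []
    for target in task_sequence:
        for _ in range(steps_per_task):
            # First-order smoothing toward the current motor target.
            position = position + smoothing * (target - position)
            trajectory.append(position)
    return trajectory


def acoustic_stage(trajectory):
    """Placeholder for the aero-acoustic process: map each articulatory
    state to an output sample (here, a trivial linear mapping)."""
    return [1.0 + 0.5 * p for p in trajectory]


# e.g., jaw-opening targets for a three-phoneme sequence (illustrative values)
tasks = [0.0, 1.0, 0.3]
movements = motor_stage(tasks)
signal = acoustic_stage(movements)
print(len(movements), len(signal))  # 30 30
```

The point of the sketch is only the data flow: discrete tasks in, continuous movements out, then a sample-by-sample acoustic mapping; the actual model replaces both stages with physiologically and acoustically grounded computations.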

Original language: English
Pages (from-to): 87-92
Number of pages: 6
Journal: NTT R and D
Volume: 44
Issue number: 1
Publication status: Published - 1995
Externally published: Yes

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

