Speech synthesis by mimicking articulatory movements

Masaaki Honda*, Tokihiko Kaburagi, Takeshi Okadome

*Corresponding author for this work

Research output: Conference article › peer-review

5 Citations (Scopus)

Abstract

We describe a computational model of speech production that consists of two stages: trajectory formation, which generates articulatory movements from phoneme-specific gestures, and articulatory-to-acoustic mapping, which generates a speech signal from the articulatory motion. Context-dependent and context-independent approaches to task-oriented trajectory formation are presented from the viewpoint of how to cope with contextual variability in articulatory movements. The model is evaluated by comparing the computed articulatory trajectories and speech acoustics with the originals. We also describe recovery of articulatory motion from speech acoustics, which generates articulatory movements by mimicking the speech signal.
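The two-stage pipeline described in the abstract can be illustrated with a minimal sketch. Everything here is assumed for illustration: the phoneme targets, the cosine interpolation used as a stand-in for task-dynamic trajectory smoothing, and the linear articulatory-to-acoustic map are hypothetical placeholders, not the paper's actual model or data.

```python
import numpy as np

# Hypothetical phoneme-specific articulatory targets (gestures): each phoneme
# maps to a target in a 2-D articulatory space (e.g. jaw, tongue position).
# These values are illustrative, not measured data from the paper.
GESTURE_TARGETS = {
    "a": np.array([0.8, 0.2]),
    "i": np.array([0.1, 0.9]),
    "u": np.array([0.3, 0.7]),
}

def form_trajectory(phonemes, steps_per_phone=10):
    """Stage 1, trajectory formation: interpolate smoothly between
    successive phoneme targets to yield a continuous articulatory path."""
    targets = [GESTURE_TARGETS[p] for p in phonemes]
    traj = []
    for a, b in zip(targets, targets[1:]):
        for t in np.linspace(0.0, 1.0, steps_per_phone, endpoint=False):
            # Cosine interpolation gives zero velocity at each target,
            # a crude stand-in for task-dynamic smoothing.
            w = (1 - np.cos(np.pi * t)) / 2
            traj.append((1 - w) * a + w * b)
    traj.append(targets[-1])
    return np.array(traj)

def articulatory_to_acoustic(traj):
    """Stage 2, articulatory-to-acoustic mapping: here just a fixed linear
    map from articulatory state to two formant-like values (a placeholder
    for real vocal-tract acoustics)."""
    M = np.array([[700.0, 300.0], [1200.0, 2300.0]])  # assumed mapping matrix
    return traj @ M.T

trajectory = form_trajectory(["a", "i", "u"])
acoustics = articulatory_to_acoustic(trajectory)
print(trajectory.shape, acoustics.shape)  # (21, 2) (21, 2)
```

The recovery problem mentioned in the abstract is the inverse of stage 2: estimating the articulatory trajectory from the acoustics, which is ill-posed in general because the articulatory-to-acoustic map is many-to-one.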

Original language: English
Pages (from-to): II-463 - II-468
Journal: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics
Volume: 2
Publication status: Published - 1999
Externally published: Yes
Event: 1999 IEEE International Conference on Systems, Man, and Cybernetics 'Human Communication and Cybernetics' - Tokyo, Japan
Duration: 12 Oct 1999 - 15 Oct 1999

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Hardware and Architecture
