Speech synthesis by mimicking articulatory movements

Masaaki Honda*, Tokihiko Kaburagi, Takeshi Okadome

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

We describe a computational model of speech production that consists of two components: trajectory formation, which generates articulatory movements from phoneme-specific gestures, and articulatory-to-acoustic mapping, which generates the speech signal from the articulatory motion. Context-dependent and context-independent approaches to task-oriented trajectory formation are presented from the viewpoint of how to cope with contextual variability in articulatory movements. The model is evaluated by comparing the computed articulatory trajectories and speech acoustics with the originals. We also describe the recovery of articulatory motion from speech acoustics, which allows articulatory movements to be generated by mimicking speech acoustics.
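As an illustration of the two-stage pipeline the abstract describes, the following is a minimal sketch, not the paper's actual model: the target values, the first-order smoothing used for trajectory formation, and the linear weights W and b of the articulatory-to-acoustic mapping are all hypothetical stand-ins for the task-oriented trajectory formation and mapping developed in the paper.

```python
# Minimal sketch: phoneme-specific articulatory targets -> smooth
# trajectory -> acoustic parameters. All targets and weights are
# hypothetical; the paper's model is considerably more elaborate.
import numpy as np

def form_trajectory(targets, frames_per_target=20, smooth=0.15):
    """Generate a smooth articulator trajectory through phoneme targets.

    First-order low-pass movement toward each successive target, a toy
    stand-in for task-oriented trajectory formation.
    """
    x = targets[0].copy()
    out = []
    for t in targets:
        for _ in range(frames_per_target):
            x += smooth * (t - x)   # move a fraction of the way to the target
            out.append(x.copy())
    return np.array(out)

def articulatory_to_acoustic(traj, W, b):
    """Toy linear articulatory-to-acoustic mapping (hypothetical weights)."""
    return traj @ W + b

# Hypothetical 3-dimensional articulator positions for three phonemes.
targets = np.array([[0.0, 0.2, -0.1],
                    [0.5, -0.3, 0.4],
                    [-0.2, 0.1, 0.0]])
traj = form_trajectory(targets)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 12)), np.zeros(12)   # 12 acoustic parameters
acoustic = articulatory_to_acoustic(traj, W, b)
print(traj.shape, acoustic.shape)               # (60, 3) (60, 12)
```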

Original language: English
Pages (from-to): II-463 - II-468
Journal: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics
Volume: 2
Publication status: Published - 1999 Dec 1
Externally published: Yes
Event: 1999 IEEE International Conference on Systems, Man, and Cybernetics 'Human Communication and Cybernetics' - Tokyo, Japan
Duration: 1999 Oct 12 - 1999 Oct 15

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Hardware and Architecture
