A Media Conversion from Speech to Facial Image for Intelligent Man-Machine Interface

Research output: Article, peer-reviewed

75 citations (Scopus)


An automatic facial motion image synthesis scheme, driven by speech, and a real-time image synthesis design are presented. The purpose of this research is to realize an “intelligent” human-machine interface or “intelligent” communication system with talking head images. A human face is reconstructed on the display of a terminal using a 3-D surface model and a texture mapping technique. Facial motion images are synthesized naturally by transforming the lattice points of a 3-D wire-frame model. Two motion-driving methods are proposed in this paper: a text-to-image conversion scheme and a voice-to-image conversion scheme. With the first method, the synthesized head image can appear to speak given words and phrases naturally. With the second, mouth and jaw motions can be synthesized in synchronization with the voice signal from a speaker. Facial expressions beyond mouth shape and jaw position can also be added at any moment, so special modification rules make it easy to have the facial model appear angry, smile, appear sad, and so on. These schemes were implemented on a parallel image computer system, and a real-time image synthesizer was able to generate facial motion images on the display at TV video rate.
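The abstract's core mechanism, deforming the lattice points of a 3-D wire-frame face model according to speech-derived mouth-shape parameters, can be sketched as follows. This is an illustrative reconstruction only: the viseme table (`VISEME_PARAMS`), the displacement constants, and the function `deform_mouth` are hypothetical and are not taken from the paper, whose actual modification rules the abstract does not specify.

```python
import numpy as np

# Hypothetical viseme -> (jaw drop, lip width) parameters; the paper's
# actual modification rules and parameter values are not given in the abstract.
VISEME_PARAMS = {
    "a": (1.0, 0.6),    # wide-open mouth
    "i": (0.2, 1.0),    # spread lips
    "u": (0.3, 0.3),    # rounded lips
    "sil": (0.0, 0.5),  # closed, neutral
}

def deform_mouth(lattice, viseme, mouth_mask, weights):
    """Displace the mouth-region lattice points of a 3-D wire-frame model.

    lattice    : (N, 3) array of neutral-face lattice points (x, y, z)
    viseme     : key into VISEME_PARAMS, e.g. from a speech recognizer
    mouth_mask : boolean (N,) array marking mouth/jaw lattice points
    weights    : (N,) per-point falloff weights in [0, 1]
    """
    jaw, width = VISEME_PARAMS[viseme]
    out = lattice.copy()
    # Lower the jaw (negative y) and scale lip width (x) for masked points only;
    # all other lattice points keep their neutral positions.
    out[mouth_mask, 1] -= 0.02 * jaw * weights[mouth_mask]
    out[mouth_mask, 0] *= 1.0 + 0.1 * (width - 0.5) * weights[mouth_mask]
    return out

# Usage: a toy 4-point lattice, three mouth points and one forehead point.
lattice = np.array([[ 0.01,  0.00, 0.0],
                    [-0.01,  0.00, 0.0],
                    [ 0.00, -0.01, 0.0],
                    [ 0.00,  0.50, 0.0]])  # forehead point, untouched
mouth_mask = np.array([True, True, True, False])
weights = np.array([1.0, 1.0, 1.0, 0.0])
frame = deform_mouth(lattice, "a", mouth_mask, weights)
```

Rendering each deformed frame through the texture-mapped surface model, one frame per speech unit, would yield the mouth motion synchronized with the voice signal, as the abstract describes.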

Journal: IEEE Journal on Selected Areas in Communications
Publication status: Published - May 1991

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Electrical and Electronic Engineering

