A facial motion synthesis for intelligent man‐machine interface

Shigeo Morishima*, Shin'Ichi Okada, Hiroshi Harashima

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)


A facial motion image synthesis method for an intelligent man‐machine interface is examined. Here, the intelligent man‐machine interface is a friendly interface with voice and pictures, in which a human face appears on the screen and answers questions, in contrast to currently existing user interfaces, which primarily use text. Since what appears on the screen is a human face, if its speech mannerisms and facial expressions are natural, then interacting with the machine becomes similar to interacting with an actual human being. To implement such an intelligent man‐machine interface, it is necessary to synthesize natural facial expressions on the screen. This paper investigates a method to synthesize facial motion images from given text and emotion information. The proposed method utilizes the analysis‐synthesis image coding method: it constructs facial images by assigning intensity data to the parameters of a 3‐dimensional (3‐D) model matched to the person in question. It then synthesizes facial expressions by modifying the 3‐D model according to a predetermined set of rules driven by the input phonemes and emotion, producing reasonably natural facial images.
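The rule-based mapping described in the abstract can be sketched in outline: each input phoneme and emotion selects a set of deformation parameters, which then displace control points of the 3-D face model. The following Python sketch is purely illustrative; the rule tables, parameter names, and control-point names are assumptions for exposition, not the paper's actual data or rules.

```python
# Illustrative sketch of a rule-based phoneme/emotion-to-deformation mapping.
# All tables and names below are hypothetical, not taken from the paper.

# Predetermined rules: each phoneme contributes weights for mouth/jaw parameters.
PHONEME_RULES = {
    "a": {"jaw": 0.8, "lip_corner": 0.1},
    "i": {"jaw": 0.2, "lip_corner": 0.6},
    "m": {"jaw": 0.0, "lip_corner": 0.0},
}

# Predetermined rules: each emotion contributes weights for expression parameters.
EMOTION_RULES = {
    "neutral": {"brow": 0.0, "lip_corner": 0.0},
    "joy":     {"brow": 0.2, "lip_corner": 0.5},
    "anger":   {"brow": -0.6, "lip_corner": -0.3},
}

def deformation_parameters(phoneme, emotion):
    """Combine phoneme- and emotion-driven rules into one parameter set."""
    params = dict(PHONEME_RULES.get(phoneme, {}))
    for name, weight in EMOTION_RULES.get(emotion, {}).items():
        params[name] = params.get(name, 0.0) + weight
    return params

def deform(base_vertices, params, basis):
    """Displace 3-D model vertices along per-parameter basis vectors.

    base_vertices: {vertex_name: (x, y, z)} rest positions of the model.
    basis: {param_name: {vertex_name: (dx, dy, dz)}} displacement directions.
    """
    vertices = {name: list(pos) for name, pos in base_vertices.items()}
    for pname, weight in params.items():
        for vname, (dx, dy, dz) in basis.get(pname, {}).items():
            v = vertices[vname]
            v[0] += weight * dx
            v[1] += weight * dy
            v[2] += weight * dz
    return vertices
```

For example, the phoneme "a" spoken with "joy" would combine a wide jaw opening with raised lip corners and brows, and `deform` would move the corresponding model vertices accordingly before the image is rendered by mapping intensity data onto the deformed model.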

Original language: English
Pages (from-to): 50-59
Number of pages: 10
Journal: Systems and Computers in Japan
Issue number: 5
Publication status: Published - 1991
Externally published: Yes

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Information Systems
  • Hardware and Architecture
  • Computational Theory and Mathematics
