Expression analysis/synthesis system based on emotion space constructed by multilayered neural network

Nobuo Ueki*, Shigeo Morishima, Hiroshi Yamada, Hiroshi Harashima

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)


To realize a user-friendly interface in which a human and a computer can engage in face-to-face communication, the computer must be able to recognize the human's emotional state from facial expressions, and then synthesize and display a reasonable facial image in response. To describe this analysis and synthesis of facial expressions easily, the computer itself should have some kind of emotion model. Identity-mapping training was performed on parameterized facial expressions using a five-layered neural network, which offers generalization ability and superior nonlinear mapping performance. The space formed in the middle layer of this network was taken as the emotion model, and an emotion space was constructed. Based on this emotion space, an attempt was made to build a system that realizes the mappings from expression to emotion and from emotion to expression simultaneously. Moreover, to recognize a facial expression from an actual facial image of a human, a method of extracting the facial parameters from the movement of facial feature points is investigated.
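The idea of obtaining an emotion space from the middle layer of a five-layered network trained on identity mapping can be sketched as follows. This is a minimal illustration, not the paper's implementation: the layer sizes, the toy "expression parameter" data, and all hyperparameters here are hypothetical, and the 3-unit bottleneck stands in for the emotion space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five layers: input, hidden, 3-unit "emotion space", hidden, output.
sizes = [6, 10, 3, 10, 6]
Ws = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(n) for n in sizes[1:]]

def forward(X):
    """Return the activations of every layer; acts[2] is the emotion space."""
    acts = [X]
    for W, b in zip(Ws, bs):
        acts.append(np.tanh(acts[-1] @ W + b))
    return acts

def train_step(X, lr=0.1):
    """One gradient-descent step on the identity-mapping loss ||output - X||^2."""
    acts = forward(X)
    # Error signal at the output layer (tanh derivative is 1 - tanh^2).
    delta = (acts[-1] - X) * (1 - acts[-1] ** 2)
    for i in reversed(range(len(Ws))):
        grad_W = acts[i].T @ delta / len(X)
        grad_b = delta.mean(axis=0)
        if i > 0:  # backpropagate with the pre-update weights
            delta = (delta @ Ws[i].T) * (1 - acts[i] ** 2)
        Ws[i] -= lr * grad_W
        bs[i] -= lr * grad_b
    return np.mean((acts[-1] - X) ** 2)

# Toy parameterized "facial expressions" in [-0.8, 0.8] (hypothetical data).
X = rng.uniform(-0.8, 0.8, (200, 6))
losses = [train_step(X) for _ in range(2000)]

# Expression -> emotion mapping: read the middle-layer activations.
emotion_codes = forward(X)[2]
print(emotion_codes.shape)  # (200, 3)
```

The first half of the trained network maps an expression to a point in the emotion space, and the second half maps an emotion-space point back to an expression, which is how the two mappings can be realized simultaneously by one network.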

Original language: English
Pages (from-to): 95-107
Number of pages: 13
Journal: Systems and Computers in Japan
Issue number: 13
Publication status: Published - 1994 Nov 1
Externally published: Yes

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Information Systems
  • Hardware and Architecture
  • Computational Theory and Mathematics

