Expression analysis/synthesis system based on emotion space constructed by multilayered neural network

Nobuo Ueki*, Shigeo Morishima, Hiroshi Yamada, Hiroshi Harashima

*Corresponding author for this work

Research output: Article › peer-review

28 Citations (Scopus)

Abstract

To realize a user-friendly interface in which a human and a computer can engage in face-to-face communication, the computer must be able to recognize the emotional state of the human from facial expressions and then synthesize and display a reasonable facial image in response. To describe this analysis and synthesis of facial expressions easily, the computer itself should have some kind of emotion model. Identity mapping training was performed on parameterized facial expressions using a five-layered neural network, which offers generalization ability and superior nonlinear mapping performance. The space formed in the middle layer of this network was adopted as the emotion model, and an emotion space was constructed from it. Based on this emotion space, an attempt was made to build a system that can realize mappings from an expression to an emotion and from an emotion to an expression simultaneously. Moreover, to recognize a facial expression from an actual facial image of a human, a method of extracting the facial parameters from the movement of facial feature points is investigated.
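As a rough illustration of the architecture described in the abstract (not the authors' implementation), the sketch below trains a five-layered autoencoder on facial-expression parameter vectors by identity mapping and treats its narrow middle layer as the emotion space. The parameter dimensionality, layer sizes, emotion-space dimensionality, training data, and the use of PyTorch are all assumptions made for the example.

```python
# Minimal sketch of the "emotion space" idea: a five-layered network
# (input, hidden, bottleneck, hidden, output) is trained to reproduce its
# input (identity mapping), and the bottleneck activations serve as a
# low-dimensional emotion space. All sizes below are illustrative.
import torch
import torch.nn as nn

N_PARAMS = 17    # assumed size of a facial-expression parameter vector
EMOTION_DIM = 3  # assumed dimensionality of the emotion space (middle layer)

class EmotionSpaceAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Layers 1-3: expression parameters -> emotion space (analysis direction)
        self.encoder = nn.Sequential(
            nn.Linear(N_PARAMS, 10), nn.Sigmoid(),
            nn.Linear(10, EMOTION_DIM), nn.Sigmoid(),
        )
        # Layers 3-5: emotion space -> expression parameters (synthesis direction)
        self.decoder = nn.Sequential(
            nn.Linear(EMOTION_DIM, 10), nn.Sigmoid(),
            nn.Linear(10, N_PARAMS), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)           # expression -> emotion
        return self.decoder(z), z     # emotion -> reconstructed expression

model = EmotionSpaceAutoencoder()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Identity-mapping training: the target is the input itself.
expressions = torch.rand(100, N_PARAMS)  # placeholder for parameterized expressions
for _ in range(1000):
    optimizer.zero_grad()
    reconstruction, _ = model(expressions)
    loss_fn(reconstruction, expressions).backward()
    optimizer.step()

# After training, the encoder maps an expression to a point in emotion space
# (analysis) and the decoder synthesizes expression parameters from an
# emotion-space point (synthesis).
```

Because the same network is trained end to end on identity mapping, the analysis (expression to emotion) and synthesis (emotion to expression) mappings are obtained simultaneously, which is the property the abstract attributes to the emotion space.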

Original language: English
Pages (from-to): 95-107
Number of pages: 13
Journal: Systems and Computers in Japan
Volume: 25
Issue number: 13
Publication status: Published - 1 Nov 1994
Externally published: Yes

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Information Systems
  • Hardware and Architecture
  • Computational Theory and Mathematics
