Realtime face analysis and synthesis using neural network

Shigeo Morishima*

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review


In this paper, we describe recent research results on generating an avatar's face in real time that exactly copies a real person's face. For the synthesis of a realistic avatar, it is essential to precisely duplicate the emotion and impression contained in the original face image and voice. A face-fitting tool based on multi-angle camera images is introduced to build a real 3D face model whose texture and geometry are very close to the original. When the avatar is speaking, the voice signal is essential for determining the mouth shape, so a real-time mouth-shape control mechanism is proposed that converts speech parameters to lip-shape parameters using a multilayer neural network. For dynamic modeling of facial expressions, a muscle-structure constraint is introduced to generate natural expressions with only a few parameters. We also attempt to obtain the muscle parameters that determine an expression automatically, from local motion vectors on the face computed by optical flow in a video sequence. Finally, we present an approach that enables the modeling of emotions appearing on faces. A system based on this approach helps to analyze, synthesize, and code face images at the emotional level.
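The speech-to-lip conversion described above can be sketched as a small multilayer perceptron mapping per-frame speech features to lip-shape parameters. The feature counts, layer sizes, and parameter meanings below are illustrative assumptions, not values from the paper, and the random weights stand in for a trained network:

```python
# Hypothetical sketch: converting speech parameters (e.g. cepstral
# coefficients per frame) to lip-shape parameters with a small
# multilayer perceptron. All dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_SPEECH = 12   # assumed speech features per frame (e.g. LPC cepstrum)
N_HIDDEN = 16   # assumed hidden-layer width
N_LIP = 4       # assumed lip-shape parameters (width, height, ...)

# Randomly initialized weights stand in for a trained network.
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_SPEECH))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_LIP, N_HIDDEN))
b2 = np.zeros(N_LIP)

def speech_to_lip(speech_frame):
    """Map one frame of speech features to lip-shape parameters."""
    h = np.tanh(W1 @ speech_frame + b1)   # hidden layer, tanh activation
    return W2 @ h + b2                    # linear output layer

frame = rng.normal(size=N_SPEECH)
lip = speech_to_lip(frame)
print(lip.shape)  # (4,)
```

In a real-time pipeline this forward pass would run once per audio frame, driving the mouth shape of the 3D face model frame by frame.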
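The automatic recovery of muscle parameters from optical-flow motion vectors can likewise be sketched as a linear inverse problem. The "muscle basis" matrix below (displacement produced at each tracked point per unit contraction of each muscle) is a hypothetical stand-in for the paper's muscle-structure model:

```python
# Hypothetical sketch: recovering muscle contraction parameters from
# optical-flow motion vectors by linear least squares. The muscle
# basis B is an illustrative stand-in, not the paper's actual model.
import numpy as np

rng = np.random.default_rng(1)

N_POINTS = 50    # tracked points on the face
N_MUSCLES = 6    # assumed number of muscle parameters

# Each column: 2D displacement of every point per unit muscle contraction.
B = rng.normal(size=(2 * N_POINTS, N_MUSCLES))

# Simulate an observed flow field from known contractions plus noise.
true_params = np.array([0.8, 0.0, 0.3, 0.0, 0.5, 0.1])
flow = B @ true_params + rng.normal(scale=0.01, size=2 * N_POINTS)

# Least-squares estimate of the muscle parameters from the observed flow.
est, *_ = np.linalg.lstsq(B, flow, rcond=None)
print(np.round(est, 2))
```

In the system described, the flow field would come from optical flow on a video sequence rather than simulation, and the estimated parameters would then drive the muscle-constrained expression model.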

Original language: English
Number of pages: 10
Publication status: Published - 2000 Dec 1
Externally published: Yes
Event: 10th IEEE Workshop on Neural Networks for Signal Processing (NNSP2000) - Sydney, Australia
Duration: 2000 Dec 11 to 2000 Dec 13



ASJC Scopus subject areas

  • Signal Processing
  • Software
  • Electrical and Electronic Engineering

