Abstract
In this paper, we describe recent research results on generating an avatar's face in real time so that it closely reproduces a real person's face. For the synthesis of a realistic avatar, it is essential to reproduce precisely the emotion and impression contained in the original face image and voice. A face fitting tool based on multi-angle camera images is introduced to build a 3D face model whose texture and geometry closely match the original. When the avatar speaks, the voice signal is essential for determining the mouth shape, so a real-time mouth shape control mechanism is proposed that converts speech parameters into lip shape parameters using a multilayer neural network. For dynamic modeling of facial expression, a muscle structure constraint is introduced so that natural expressions can be generated with a small number of parameters. We also attempt to obtain the muscle parameters that determine an expression automatically, from local motion vectors on the face computed by optical flow over the video sequence. Finally, we present an approach that enables the modeling of emotions appearing on faces; a system based on this approach helps to analyze, synthesize, and code face images at the emotional level.
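The abstract's mouth shape control step maps frame-by-frame speech parameters to lip shape parameters with a multilayer neural network. The paper's actual input features, layer sizes, and lip parameter set are not given here, so the following NumPy sketch is only illustrative: the 16 speech parameters, 24 hidden units, and 4 lip shape parameters are assumptions, and the weights are random placeholders standing in for values that would be learned from paired speech/lip-shape data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 16 speech parameters per frame -> 4 lip shape parameters
# (e.g. mouth width, mouth height, jaw opening, lip protrusion).
N_IN, N_HIDDEN, N_OUT = 16, 24, 4

# Placeholder weights; in a real system these would be trained by
# backpropagation on paired speech-parameter / lip-shape data.
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_IN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_OUT, N_HIDDEN))
b2 = np.zeros(N_OUT)

def speech_to_lip(speech_params: np.ndarray) -> np.ndarray:
    """Map one frame of speech parameters to lip shape parameters."""
    hidden = np.tanh(W1 @ speech_params + b1)  # nonlinear hidden layer
    return W2 @ hidden + b2                    # linear output layer

# One frame of (dummy) speech parameters drives one frame of mouth shape.
frame = rng.normal(size=N_IN)
print(speech_to_lip(frame))
```

Similarly, the automatic estimation of muscle parameters starts from local motion vectors obtained by optical flow over the face video. A minimal sketch of that front end, using OpenCV's dense Farneback flow on synthetic frames, is given below; the region of interest and the simple averaging step are assumptions, and the mapping from the mean displacement to an actual muscle contraction parameter is omitted because the abstract does not specify it.

```python
import cv2
import numpy as np

# Two consecutive grayscale face frames; synthetic here, in practice taken
# from the video sequence of the speaker's face.
prev_frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
next_frame = np.roll(prev_frame, shift=2, axis=1)  # fake horizontal motion

# Dense optical flow (Farneback): one 2D motion vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Average the motion vectors inside an assumed region around a mouth corner;
# this mean displacement would then be converted to a muscle parameter.
y0, y1, x0, x1 = 150, 180, 100, 140  # hypothetical facial region (pixels)
mean_motion = flow[y0:y1, x0:x1].reshape(-1, 2).mean(axis=0)
print("mean motion vector (dx, dy):", mean_motion)
```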
Original language | English |
---|---|
Pages | 13-22 |
Number of pages | 10 |
Publication status | Published - 2000 Dec 1 |
Externally published | Yes |
Event | 10th IEEE Workshop on Neural Networks for Signal Processing (NNSP2000) - Sydney, Australia |
Duration | 2000 Dec 11 → 2000 Dec 13 |
Other
Other | 10th IEEE Workshop on Neural Networks for Signal Processing (NNSP2000) |
---|---|
City | Sydney, Australia |
Period | 00/12/11 → 00/12/13 |
ASJC Scopus subject areas
- Signal Processing
- Software
- Electrical and Electronic Engineering