Abstract
This paper reports on the synthesis of a virtual human, or avatar, with a realistic texture-mapped face whose facial expressions and actions are controlled by multimodal input signals. The report covers a face-fitting tool that builds a 3-D face model from multiview camera images, and the use of the voice signal to determine the mouth-shape features while the avatar is speaking.
| Original language | English |
|---|---|
| Pages (from-to) | 26-34 |
| Number of pages | 9 |
| Journal | IEEE Signal Processing Magazine |
| Volume | 18 |
| Issue number | 3 |
| DOI | |
| Publication status | Published - May 2001 |
| Externally published | Yes |
ASJC Scopus subject areas
- Signal Processing
- Electrical and Electronic Engineering
- Applied Mathematics