Construction of audio-visual speech corpus using motion-capture system and corpus based facial animation

Tatsuo Yotsukura*, Shigeo Morishima, Satoshi Nakamura

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

An accurate audio-visual speech corpus is indispensable for talking-head research. This paper presents our audio-visual speech corpus collection and proposes a head-movement normalization method and a facial motion generation method. The corpus contains speech data, facial video data, and the positions and movements of facial organs, and it consists of Japanese phoneme-balanced sentences uttered by a female native speaker. Accurate facial capture is achieved with an optical motion-capture system; we recorded high-resolution 3D data by arranging many markers on the speaker's face. In addition, we propose a method for acquiring facial movements with head movements removed, using an affine transformation to compute the displacements of the facial organs alone. Finally, to make it easy to create facial animation from this motion data, we propose a technique for mapping the captured data onto a facial polygon model. Evaluation results demonstrate the effectiveness of the proposed facial motion generation method and show the relationship between the number of markers and the resulting errors.
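
Although the abstract only outlines the two proposed methods, the head-movement normalization step lends itself to a short sketch. The Python fragment below is a minimal illustration under our own assumptions, not the authors' implementation: it presumes a subset of markers (e.g., on the forehead and nose bridge) that move only rigidly with the head, fits a per-frame affine transform from their neutral-frame positions to their current positions, and applies the inverse transform to all markers so that only the non-rigid facial deformation remains. The function names and marker indexing are hypothetical.

    import numpy as np

    def fit_affine(src, dst):
        # Least-squares affine map (A, t) with dst ~= src @ A.T + t.
        # src, dst: (N, 3) arrays of corresponding 3D marker positions;
        # at least 4 non-coplanar points are needed for a well-posed 3D fit.
        src_h = np.hstack([src, np.ones((src.shape[0], 1))])  # homogeneous coords
        M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)       # (4, 3) solution
        return M[:3].T, M[3]                                  # A: (3, 3), t: (3,)

    def remove_head_motion(frames, rigid_idx, ref_frame=0):
        # frames: (T, N, 3) marker trajectories; rigid_idx: indices of
        # markers assumed to move only with the head (hypothetical choice).
        ref = frames[ref_frame, rigid_idx]
        out = np.empty_like(frames)
        for t_step, frame in enumerate(frames):
            A, t = fit_affine(ref, frame[rigid_idx])        # head pose this frame
            out[t_step] = (frame - t) @ np.linalg.inv(A).T  # undo head motion
        return out

The abstract likewise does not detail how the captured markers are assigned to the facial polygon model. One common scheme, shown here purely as an assumed stand-in, binds each mesh vertex to its k nearest markers in the neutral pose and interpolates vertex displacements with inverse-distance weights.

    def bind_vertices(verts, markers, k=4, eps=1e-8):
        # Precompute, per vertex, indices and weights of the k nearest markers.
        d = np.linalg.norm(verts[:, None] - markers[None], axis=2)  # (V, N)
        idx = np.argsort(d, axis=1)[:, :k]                          # (V, k)
        w = 1.0 / (np.take_along_axis(d, idx, axis=1) + eps)
        return idx, w / w.sum(axis=1, keepdims=True)

    def deform(verts, marker_disp, idx, w):
        # Move each vertex by the weighted mean of its markers' displacements.
        return verts + (marker_disp[idx] * w[..., None]).sum(axis=1)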

Original language: English
Pages (from-to): 2477-2483
Number of pages: 7
Journal: IEICE Transactions on Information and Systems
Volume: E88-D
Issue number: 11
DOIs
Publication status: Published - 2005 Nov

Keywords

  • Audio-visual corpus
  • Facial animation
  • Motion capture
  • Talking head

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
  • Artificial Intelligence
