Internet communication using real-time facial expression analysis and synthesis

Naiwala P. Chandrasiri*, Takeshi Naemura, Mitsuru Ishizuka, Hiroshi Harashima, István Barakonyi

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

A system that animates 3D facial agents based on real-time facial expression analysis techniques and research on synthesizing facial expressions and text-to-speech capabilities is now available. The system consists of three main modules: a real-time facial expression analysis component that calculates MPEG-4 facial animation parameters (FAPs), an effective 3D agent with facial expression synthesis and text-to-speech capabilities, and a communication module. Subjective evaluations involving graduate and undergraduate students confirm the communication system's effectiveness. Potential applications include virtual teleconferencing, entertainment, computer games, human-to-human communication training, and distance learning.
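
The abstract outlines a three-module pipeline: the analysis module produces MPEG-4 FAPs, a communication module carries them over the Internet, and the receiving side drives a 3D agent that synthesizes the expression. The sketch below is a minimal illustration of how per-frame FAP data could be serialized and sent between peers; the frame layout, the UDP transport, and all names are assumptions made for illustration, not the protocol described in the paper.

```python
# Hypothetical sketch: pack one frame of MPEG-4 FAP values and send it to a
# remote peer, which would feed the values to its local 3D agent.
# The layout and transport are assumptions, not the paper's actual design.
import socket
import struct
from dataclasses import dataclass, field

NUM_FAPS = 68  # MPEG-4 defines 68 facial animation parameters


@dataclass
class FapFrame:
    timestamp_ms: int
    faps: list[float] = field(default_factory=lambda: [0.0] * NUM_FAPS)

    def pack(self) -> bytes:
        # Unsigned 32-bit timestamp followed by 68 little-endian floats.
        return struct.pack(f"<I{NUM_FAPS}f", self.timestamp_ms, *self.faps)

    @classmethod
    def unpack(cls, data: bytes) -> "FapFrame":
        values = struct.unpack(f"<I{NUM_FAPS}f", data)
        return cls(timestamp_ms=values[0], faps=list(values[1:]))


def send_frame(sock: socket.socket, peer: tuple[str, int], frame: FapFrame) -> None:
    """Send one analysis result to the remote agent (UDP keeps latency low)."""
    sock.sendto(frame.pack(), peer)


if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    frame = FapFrame(timestamp_ms=0)
    frame.faps[3] = 0.7  # e.g., a single raised-eyebrow parameter from the analyzer
    send_frame(sock, ("127.0.0.1", 9000), frame)
```

Sending compact parameter frames rather than video is what makes this style of avatar-mediated communication practical over ordinary Internet connections: only a few hundred bytes per frame need to reach the remote synthesis module.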

Original language: English
Pages (from-to): 20-29
Number of pages: 10
Journal: IEEE Multimedia
Volume: 11
Issue number: 3
DOIs
Publication status: Published - July 2004
Externally published: Yes

ASJC Scopus subject areas

  • Hardware and Architecture
  • Information Systems
  • Computer Graphics and Computer-Aided Design
  • Software
  • Theoretical Computer Science
  • Computational Theory and Mathematics

