Expressive facial subspace construction from key face selection

Ryo Takamizawa*, Takanori Suzuki, Hiroyuki Kubo, Akinobu Maejima, Shigeo Morishima

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

1 Citation (Scopus)

Abstract

MoCap-based facial expression synthesis techniques have been applied to provide CG characters with expressive and accurate facial expressions [Deng et al. 2006; Lau et al. 2007]. The expressiveness these techniques can achieve depends on the variety of captured facial expressions, and it is difficult to predict before capture which expressions will be needed to synthesize an expressive face. A large amount of MoCap data is therefore required to construct a subspace via dimensionality reduction; the resulting space then allows expressions to be synthesized as linear combinations of its basis vectors. However, capturing that much facial MoCap data is laborious.
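
As a concrete illustration of the subspace idea the abstract describes (dimensionality reduction over captured key faces, then synthesis as a linear combination of basis vectors), here is a minimal sketch in Python using PCA via SVD. It is not the authors' implementation; the function names, array shapes, and the specific choice of PCA are illustrative assumptions.

    # Minimal sketch, not the authors' method: build an expression subspace by
    # PCA (via SVD) over flattened MoCap key faces, then synthesize a new
    # expression as the mean face plus a linear combination of basis vectors.
    import numpy as np

    def build_expression_subspace(key_faces, num_basis):
        """key_faces: (num_faces, 3 * num_markers) array of flattened frames."""
        mean_face = key_faces.mean(axis=0)
        centered = key_faces - mean_face
        # Rows of vt are orthonormal directions of decreasing variance.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return mean_face, vt[:num_basis]

    def synthesize_expression(mean_face, basis, coefficients):
        """New expression = mean face + weighted sum of basis vectors."""
        return mean_face + coefficients @ basis

    # Usage with random placeholder data standing in for captured key faces.
    rng = np.random.default_rng(0)
    key_faces = rng.normal(size=(50, 3 * 40))   # 50 key faces, 40 markers
    mean_face, basis = build_expression_subspace(key_faces, num_basis=8)
    new_face = synthesize_expression(mean_face, basis, rng.normal(size=8))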

Original language: English
DOIs
Publication status: Published - 2009
Event: SIGGRAPH 2009: Posters, SIGGRAPH '09 - New Orleans, LA, United States
Duration: 2009 Aug 3 – 2009 Aug 7

Conference

Conference: SIGGRAPH 2009: Posters, SIGGRAPH '09
Country/Territory: United States
City: New Orleans, LA
Period: 09/8/3 → 09/8/7

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Human-Computer Interaction
  • Software
