Study of real time facial expression detection for virtual space teleconferencing

Kazuyuki Ebihara*, Jun Ohya, Fumio Kishino

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

5 Citations (Scopus)

Abstract

A new method for real-time detection of facial expressions from time-sequential images is proposed. The proposed method does not need the tape marks that are pasted to the face for detecting expressions in real time in the current implementation of the Virtual Space Teleconferencing system. In the proposed method, four windows are applied to four areas of the face image: the left eye, the right eye, the mouth, and the forehead. Each window is divided into blocks of 8 by 8 pixels. The Discrete Cosine Transform (DCT) is applied to each block, and the feature vector of each window is obtained by summing the DCT energies in the horizontal, vertical, and diagonal directions. Using a conversion table, the feature vectors are related to real 3D movements of the face. Experiments show promising results for accurate expression detection and for a real-time hardware implementation of the proposed method.
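The abstract's feature extraction step can be illustrated with a short sketch: each 8×8 block is transformed with a 2-D DCT, and the squared coefficients are accumulated into horizontal, vertical, and diagonal energy sums per window. Note the exact grouping of DCT coefficients into the three directions is an assumption here (the abstract does not specify it), and the function names are hypothetical:

```python
import numpy as np

def dct2(block):
    # 2-D DCT-II of an 8x8 block via the orthonormal DCT matrix.
    N = block.shape[0]
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] = 1.0 / np.sqrt(N)
    return C @ block @ C.T

def directional_energies(block):
    # Sum squared DCT coefficients grouped by orientation (DC excluded):
    # row 0 -> variation along the horizontal axis only,
    # column 0 -> variation along the vertical axis only,
    # the remaining quadrant -> mixed/diagonal variation.
    # This grouping is an assumed interpretation, not taken from the paper.
    coeffs = dct2(block.astype(float)) ** 2
    horizontal = coeffs[0, 1:].sum()
    vertical = coeffs[1:, 0].sum()
    diagonal = coeffs[1:, 1:].sum()
    return np.array([horizontal, vertical, diagonal])

def window_feature(window):
    # Feature vector of one facial window (eye, mouth, or forehead):
    # directional energies accumulated over its 8x8 blocks.
    h, w = window.shape
    feat = np.zeros(3)
    for i in range(0, h - h % 8, 8):
        for j in range(0, w - w % 8, 8):
            feat += directional_energies(window[i:i + 8, j:j + 8])
    return feat
```

A uniform block yields zero in all three directions, while a block whose intensity varies only left-to-right puts all its energy in the horizontal component; the per-window vectors would then be looked up in the paper's conversion table to recover 3D facial movements.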

Original language: English
Pages: 247-251
Number of pages: 5
Publication status: Published - 1995 Dec 1
Externally published: Yes
Event: Proceedings of the 1995 4th IEEE International Workshop on Robot and Human Communication, RO-MAN - Tokyo, Japan
Duration: 1995 Jul 5 - 1995 Jul 7


ASJC Scopus subject areas

  • Hardware and Architecture
  • Software
