A new method for real-time detection of facial expressions from time-sequential images is proposed. The proposed method does not need the tape marks that are pasted onto the face for real-time expression detection in the current implementation of Virtual Space Teleconferencing. In the proposed method, four windows are applied to four areas of the face image: the left eye, the right eye, the mouth, and the forehead. Each window is divided into blocks of 8 by 8 pixels. The Discrete Cosine Transform (DCT) is applied to each block, and the feature vector of each window is obtained by summing the DCT energies in the horizontal, vertical, and diagonal directions. Through a conversion table, the feature vectors are related to real 3D movements of the face. Experiments show promising results for accurate expression detection and for a real-time hardware implementation of the proposed method.
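The abstract's feature-extraction step (blockwise 8x8 DCT followed by directional energy sums) can be sketched as below. The exact partition of DCT coefficients into horizontal, vertical, and diagonal groups is not specified in the abstract, so the split used here (first row, first column, and the remaining mixed terms) is an illustrative assumption, as are the function names.

```python
import numpy as np

def dct2(block):
    # 2-D orthonormal DCT-II of a square block, computed as C @ block @ C.T
    N = block.shape[0]
    k = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)  # DC basis row has a different normalization
    return C @ block @ C.T

def window_feature(window):
    """Feature vector of one face window: per-block DCT energies summed in
    the horizontal, vertical and diagonal frequency directions.
    The coefficient grouping below is an assumption for illustration."""
    b = 8  # block size used in the paper
    H = V = D = 0.0
    rows, cols = window.shape
    for r in range(0, rows - rows % b, b):
        for c in range(0, cols - cols % b, b):
            coef = dct2(window[r:r + b, c:c + b].astype(float))
            H += np.sum(coef[0, 1:] ** 2)   # purely horizontal frequencies
            V += np.sum(coef[1:, 0] ** 2)   # purely vertical frequencies
            D += np.sum(coef[1:, 1:] ** 2)  # mixed / diagonal terms
    return np.array([H, V, D])
```

For a uniform (featureless) window all three energies are zero, since only the DC coefficient of each block is nonzero; expression changes around the eyes or mouth would shift energy into these directional sums.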
|Publication status||Published - 1 Dec 1995|
|Event||Proceedings of the 1995 4th IEEE International Workshop on Robot and Human Communication, RO-MAN - Tokyo, Japan|
Duration: 5 Jul 1995 → 7 Jul 1995