To realize a user-friendly interface in which a human and a computer can engage in face-to-face communication, the computer must be able to recognize the emotional state of the human from facial expressions, and then synthesize and display an appropriate facial image in response. To perform this analysis and synthesis of facial expressions easily, the computer itself should have some kind of emotion model. Identity-mapping training was performed on parameterized facial expressions using a five-layered neural network, which offers generalization ability and superior nonlinear mapping performance. An emotion space was then constructed from the representation formed in the middle layer of this network, treating that space as the emotion model. Based on this emotion space, an attempt was made to build a system that realizes the mappings from expression to emotion and from emotion to expression simultaneously. Moreover, to recognize a facial expression from an actual facial image of a human, a method for extracting the facial parameters from the movement of facial feature points is investigated.
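The idea described above can be sketched in code: a five-layered network (input, hidden, low-dimensional middle, hidden, output) is trained to reproduce its input (identity mapping), so the middle layer becomes a compact "emotion space"; the first half of the network then maps expression to emotion and the second half maps emotion back to expression. This is a minimal illustrative sketch, not the paper's implementation; the layer sizes, activation, learning rate, and synthetic data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tanh_grad(y):
    # Derivative of tanh expressed via its output y = tanh(x).
    return 1.0 - y * y

class FiveLayerAutoencoder:
    def __init__(self, sizes=(10, 8, 2, 8, 10)):
        # Four weight matrices connect the five layers; sizes are illustrative.
        self.W = [rng.normal(0.0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(n) for n in sizes[1:]]

    def forward(self, x):
        acts = [x]
        for W, b in zip(self.W, self.b):
            acts.append(np.tanh(acts[-1] @ W + b))
        return acts  # acts[2] is the middle-layer (emotion-space) activation

    def train_step(self, x, lr=0.1):
        # One step of identity-mapping training: target equals the input.
        acts = self.forward(x)
        delta = (acts[-1] - x) * tanh_grad(acts[-1])
        for i in reversed(range(len(self.W))):
            grad_W = np.outer(acts[i], delta)
            prev_delta = (delta @ self.W[i].T) * tanh_grad(acts[i]) if i > 0 else None
            self.W[i] -= lr * grad_W
            self.b[i] -= lr * delta
            delta = prev_delta

    def expression_to_emotion(self, x):
        # First half of the network: expression parameters -> emotion space.
        return self.forward(x)[2]

    def emotion_to_expression(self, e):
        # Second half of the network: emotion space -> expression parameters.
        a = e
        for W, b in zip(self.W[2:], self.b[2:]):
            a = np.tanh(a @ W + b)
        return a

# Toy training on a few synthetic "facial expression parameter" vectors.
data = rng.uniform(-0.8, 0.8, (5, 10))
net = FiveLayerAutoencoder()
for epoch in range(2000):
    for x in data:
        net.train_step(x)

emotion = net.expression_to_emotion(data[0])   # 2-D point in emotion space
recon = net.emotion_to_expression(emotion)     # reconstructed expression
print(emotion.shape, recon.shape)
```

Because both directions are halves of the same trained network, the expression-to-emotion and emotion-to-expression mappings are obtained simultaneously from a single identity-mapping training run, which is the property the abstract highlights.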
Systems and Computers in Japan
Published - November 1, 1994