TY - GEN
T1 - Emotion space for analysis and synthesis of facial expression
AU - Morishima, S.
AU - Harashima, H.
PY - 1993/1/1
Y1 - 1993/1/1
N2 - This paper presents a new emotion model that provides a criterion for estimating a person's emotional state from a face image. Our final goal is to realize a very natural and user-friendly human-machine communication environment by giving a face to a computer terminal or communication system that can also understand the user's emotional state. The emotion model must therefore express the emotional meaning of a parameterized facial expression and its motion quantitatively. Our emotion model is based on a 5-layered neural network, which offers generalization and nonlinear mapping performance. The input and output layers have the same number of units, so an identity mapping can be realized and an emotion space can be constructed in the middle (3rd) layer. The mapping from the input layer to the middle layer corresponds to emotion recognition, and the mapping from the middle layer to the output layer corresponds to expression synthesis from an emotion value. Training is performed on 13 typical emotion patterns expressed by expression parameters. A subjective test of this emotion space confirms the validity of the model. The Facial Action Coding System is selected as an efficient criterion for describing subtle facial expressions and motion.
UR - http://www.scopus.com/inward/record.url?scp=84855934190&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84855934190&partnerID=8YFLogxK
U2 - 10.1109/ROMAN.1993.367724
DO - 10.1109/ROMAN.1993.367724
M3 - Conference contribution
AN - SCOPUS:84855934190
T3 - Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication, RO-MAN 1993
SP - 188
EP - 193
BT - Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication, RO-MAN 1993
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2nd IEEE International Workshop on Robot and Human Communication, RO-MAN 1993
Y2 - 3 November 1993 through 5 November 1993
ER -
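
The abstract above describes a 5-layered identity-mapping network (an autoencoder in modern terms) whose middle (3rd) layer forms the emotion space: input-to-middle acts as emotion recognition, middle-to-output as expression synthesis. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the layer sizes, the sigmoid activations, the learning rate, and the random placeholder data standing in for the 13 expression-parameter patterns are all assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: the paper trains on 13 typical emotion patterns
# expressed by expression parameters; the parameter count (17) and
# the emotion-space dimensionality (3) are hypothetical here.
N_PATTERNS, N_PARAMS, N_HIDDEN, N_EMOTION = 13, 17, 10, 3

# 5-layer identity-mapping network:
# input -> hidden -> emotion space (3rd layer) -> hidden -> output
sizes = [N_PARAMS, N_HIDDEN, N_EMOTION, N_HIDDEN, N_PARAMS]
W = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """Return the activations of every layer for input batch x."""
    acts = [x]
    for Wi, bi in zip(W, b):
        acts.append(sigmoid(acts[-1] @ Wi + bi))
    return acts

# Placeholder training data: 13 expression-parameter vectors in [0, 1].
X = rng.uniform(0, 1, (N_PATTERNS, N_PARAMS))

lr = 0.5
for epoch in range(5000):
    acts = forward(X)
    # Identity mapping: the network is trained to reproduce its input,
    # forcing the 3-unit middle layer to compress the expression.
    delta = (acts[-1] - X) * acts[-1] * (1 - acts[-1])
    for i in reversed(range(len(W))):
        gW = acts[i].T @ delta / N_PATTERNS
        gb = delta.mean(axis=0)
        if i > 0:  # propagate the error before updating this layer
            delta = (delta @ W[i].T) * acts[i] * (1 - acts[i])
        W[i] -= lr * gW
        b[i] -= lr * gb

acts = forward(X)
emotion_space = acts[2]    # middle (3rd) layer: "recognized" emotion value
reconstruction = acts[-1]  # output layer: synthesized expression parameters
print("reconstruction error:", np.mean((reconstruction - X) ** 2))
print("emotion-space coordinates of pattern 0:", emotion_space[0])

After training, the encoder half (forward pass truncated at acts[2]) maps expression parameters into the low-dimensional emotion space, and the decoder half maps an emotion-space point back to expression parameters, mirroring the recognition/synthesis split described in the abstract.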