TY - JOUR
T1 - Multi-modal feature fusion for better understanding of human personality traits in social human–robot interaction
AU - Shen, Zhihao
AU - Elibol, Armagan
AU - Chong, Nak Young
N1 - Funding Information:
The authors are grateful for financial support from the Air Force Office of Scientific Research, United States, under AFOSR-AOARD/FA2386-19-1-4015, and from the Shibuya Science, Culture, and Sports Foundation 2019 Grant Program, Japan.
Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2021/12
Y1 - 2021/12
N2 - As human–robot interaction becomes increasingly prevalent in our daily lives, there is a great demand for enabling robots to better understand human personality traits and for inspiring humans to engage more deeply in interactions with robots. Therefore, in this work, designing the human–robot interaction paradigm to be as close to real situations as possible, we address the following three main problems: (1) fusion of visual and audio features of human interaction modalities, (2) integration of variable-length feature vectors, and (3) compensation for shaky camera motion caused by the robot's communicative gestures. Specifically, the three most important visual features of humans, namely head motion, gaze, and body motion, were extracted from a camera mounted on the robot, which performed verbal and body gestures during the interaction. Then, our system fused the aforementioned visual features with different types of vocal features, such as voice pitch, voice energy, and Mel-Frequency Cepstral Coefficients, while handling multiple variable-length feature vectors. Lastly, considering the unknown patterns and sequential characteristics of human communicative behavior, we proposed a multi-layer Hidden Markov Model that improved the classification accuracy of personality traits and offered notable advantages in fusing the multiple features. The results were thoroughly analyzed and are supported by psychological studies. The proposed multi-modal fusion approach is expected to deepen the communicative competence of social robots interacting with humans from different cultures and backgrounds.
AB - As human–robot interaction becomes increasingly prevalent in our daily lives, there is a great demand for enabling robots to better understand human personality traits and for inspiring humans to engage more deeply in interactions with robots. Therefore, in this work, designing the human–robot interaction paradigm to be as close to real situations as possible, we address the following three main problems: (1) fusion of visual and audio features of human interaction modalities, (2) integration of variable-length feature vectors, and (3) compensation for shaky camera motion caused by the robot's communicative gestures. Specifically, the three most important visual features of humans, namely head motion, gaze, and body motion, were extracted from a camera mounted on the robot, which performed verbal and body gestures during the interaction. Then, our system fused the aforementioned visual features with different types of vocal features, such as voice pitch, voice energy, and Mel-Frequency Cepstral Coefficients, while handling multiple variable-length feature vectors. Lastly, considering the unknown patterns and sequential characteristics of human communicative behavior, we proposed a multi-layer Hidden Markov Model that improved the classification accuracy of personality traits and offered notable advantages in fusing the multiple features. The results were thoroughly analyzed and are supported by psychological studies. The proposed multi-modal fusion approach is expected to deepen the communicative competence of social robots interacting with humans from different cultures and backgrounds.
KW - Human personality traits
KW - Human–robot interaction
KW - Machine learning
KW - Multi-modal feature fusion
UR - http://www.scopus.com/inward/record.url?scp=85113553122&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85113553122&partnerID=8YFLogxK
U2 - 10.1016/j.robot.2021.103874
DO - 10.1016/j.robot.2021.103874
M3 - Article
AN - SCOPUS:85113553122
SN - 0921-8890
VL - 146
JO - Robotics and Autonomous Systems
JF - Robotics and Autonomous Systems
M1 - 103874
ER -