TY - GEN
T1 - Expressive humanoid robot for automatic accompaniment
AU - Xia, Guangyu
AU - Kawai, Mao
AU - Matsuki, Kei
AU - Fu, Mutian
AU - Cosentino, Sarah
AU - Trovato, Gabriele
AU - Dannenberg, Roger
AU - Sessa, Salvatore
AU - Takanishi, Atsuo
N1 - Publisher Copyright:
© 2016 Guangyu Xia et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License 3.0 Unported, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
PY - 2016
Y1 - 2016
N2 - We present a music-robotic system capable of performing an accompaniment for a musician and reacting to the human performance with gestural and facial expressions in real time. This work can be seen as a marriage between social robotics and computer accompaniment systems, aimed at creating more musical, interactive, and engaging performances between humans and machines. We also conduct subjective evaluations with audiences to validate the joint effects of robot expression and automatic accompaniment. Our results show that robot embodiment and expression significantly improve the subjective ratings of automatic accompaniment. Counterintuitively, this improvement does not appear when the machine performs a fixed sequence and the human musician simply follows the machine. To the best of our knowledge, this is the first interactive music performance between a human musician and a humanoid music robot with a systematic subjective evaluation.
AB - We present a music-robotic system capable of performing an accompaniment for a musician and reacting to the human performance with gestural and facial expressions in real time. This work can be seen as a marriage between social robotics and computer accompaniment systems, aimed at creating more musical, interactive, and engaging performances between humans and machines. We also conduct subjective evaluations with audiences to validate the joint effects of robot expression and automatic accompaniment. Our results show that robot embodiment and expression significantly improve the subjective ratings of automatic accompaniment. Counterintuitively, this improvement does not appear when the machine performs a fixed sequence and the human musician simply follows the machine. To the best of our knowledge, this is the first interactive music performance between a human musician and a humanoid music robot with a systematic subjective evaluation.
UR - http://www.scopus.com/inward/record.url?scp=85074934240&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85074934240&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85074934240
T3 - SMC 2016 - 13th Sound and Music Computing Conference, Proceedings
SP - 506
EP - 511
BT - SMC 2016 - 13th Sound and Music Computing Conference, Proceedings
A2 - Grossmann, Rolf
A2 - Hajdu, Georg
PB - Zentrum für Mikrotonale Musik und Multimediale Komposition (ZM4), Hochschule für Musik und Theater
T2 - 13th Sound and Music Computing Conference, SMC 2016
Y2 - 31 August 2016 through 3 September 2016
ER -