TY - GEN
T1 - Improvement of audio-visual score following in robot ensemble with human guitarist
AU - Itohara, Tatsuhiko
AU - Nakadai, Kazuhiro
AU - Ogata, Tetsuya
AU - Okuno, Hiroshi G.
PY - 2012
Y1 - 2012
N2 - Our goal is to create an ensemble between human guitarists and music robots, e.g., robots that sing and play instruments. Such robots need to detect the tempo and beat times of the music. Score following, which requires a score, and beat tracking, which does not, are commonly used for this purpose. Score following is an incremental audio-to-score alignment. Although most score following methods assume that players have a precise score, most scores for guitarists contain only melody and chord sequences without any beat patterns. An audio-visual beat-tracking method for guitarists has been reported that improves the accuracy of beat detection. However, its results are still insufficient because it uses only onset information, not pitch information, and because its hand tracking shows low accuracy. In this paper, we report a multimodal score-following method for guitar performance, an extension of an audio-visual beat-tracking method. The main differences are the use of chord sequences to improve tracking of audio signals and of depth information to improve tracking of the guitar playing. Chord sequences are used to calculate the chord correlation between the input and the score. Depth information is used to mask the guitar plane via a three-dimensional Hough transform, enabling stable detection of the hand. Finally, the system extracts score positions and tempos through a particle-filter-based integration of audio and visual features. The resulting score-following system improves the tempo and score-position estimates of a performance by 0.2 seconds compared to an existing system.
UR - http://www.scopus.com/inward/record.url?scp=84891054713&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84891054713&partnerID=8YFLogxK
U2 - 10.1109/HUMANOIDS.2012.6651577
DO - 10.1109/HUMANOIDS.2012.6651577
M3 - Conference contribution
AN - SCOPUS:84891054713
SN - 9781467313698
T3 - IEEE-RAS International Conference on Humanoid Robots
SP - 574
EP - 579
BT - 2012 12th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2012
T2 - 2012 12th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2012
Y2 - 29 November 2012 through 1 December 2012
ER -