TY - GEN
T1 - Multi-modal integration for personalized conversation
T2 - 2008 8th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2008
AU - Fujie, Shinya
AU - Watanabe, Daichi
AU - Ichikawa, Yuhi
AU - Taniyama, Hikaru
AU - Hosoya, Kosuke
AU - Matsuyama, Yoichi
AU - Kobayashi, Tetsunori
PY - 2008/12/1
Y1 - 2008/12/1
N2 - A humanoid robot with spoken-language communication ability is proposed and developed. For humanoids to live among people, spoken-language communication is fundamental, because humans use this kind of communication every day. However, due to the difficulties of speech recognition itself and of its implementation on a robot, a robot with such an ability had not previously been developed. In this study, we propose a robot that implements techniques to overcome these problems. The proposed system includes three key features: image processing, sound source separation, and turn-taking timing control. Processing images captured by cameras mounted in the robot's eyes enables the robot to find and identify the person it should talk to. Sound source separation enables distant speech recognition, so that people need no special device such as a head-set microphone. Turn-taking timing control is often lacking in conventional spoken dialogue systems, yet it is fundamental because conversation proceeds in real time. Experiments demonstrate the effectiveness of these elements and show an example conversation.
UR - http://www.scopus.com/inward/record.url?scp=63549150003&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=63549150003&partnerID=8YFLogxK
U2 - 10.1109/ICHR.2008.4756014
DO - 10.1109/ICHR.2008.4756014
M3 - Conference contribution
AN - SCOPUS:63549150003
SN - 9781424428229
T3 - 2008 8th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2008
SP - 617
EP - 622
BT - 2008 8th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2008
Y2 - 1 December 2008 through 3 December 2008
ER -