This paper describes a robot that converses with multiple people using its multi-modal interface. Multi-person conversation raises many new problems that are not addressed in conventional one-to-one conversation, such as information-flow problems (recognizing who is speaking and to whom, and indicating to whom the system is speaking), the space-information-sharing problem, and the turn-holder estimation problem (estimating who the next speaker is). We solved these problems by utilizing a multi-modal interface: face-direction recognition, gesture recognition, sound-direction recognition, speech recognition, and gestural expression. The systematic combination of these functions realizes a human-friendly multi-person conversation system.
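To illustrate the turn-holder estimation problem, a minimal sketch of combining per-participant multi-modal cues into a next-speaker score. All class names, cue fields, and weights here are illustrative assumptions; the paper does not specify its actual estimation scheme.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    """Hypothetical per-person cues from the multi-modal recognizers."""
    name: str
    facing_robot: bool   # from face-direction recognition
    sound_active: bool   # from sound-direction recognition
    gesturing: bool      # from gesture recognition

def turn_holder(participants):
    """Return the name of the most likely next speaker.

    Illustrative weighting: current sound activity dominates, while gaze
    toward the robot and gesturing add supporting evidence.
    """
    def score(p):
        return 2.0 * p.sound_active + 1.0 * p.facing_robot + 0.5 * p.gesturing
    return max(participants, key=score).name

people = [
    Participant("Alice", facing_robot=True,  sound_active=False, gesturing=False),
    Participant("Bob",   facing_robot=True,  sound_active=True,  gesturing=False),
    Participant("Carol", facing_robot=False, sound_active=False, gesturing=True),
]
print(turn_holder(people))  # Bob: active sound plus gaze outweighs the others
```

The point of the sketch is only that no single modality suffices: sound direction alone cannot distinguish the addressee, and face direction alone cannot detect speech, so the cues must be fused.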
|Publication status||Published - 1999|
|Event||6th European Conference on Speech Communication and Technology, EUROSPEECH 1999 - Budapest, Hungary|
Duration: 5 Sep 1999 → 9 Sep 1999
ASJC Scopus subject areas
- Computer Science Applications