Robot motion control using listener's back-channels and head gesture information

Tsuyoshi Tasaki, Takeshi Yamaguchi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

A novel method is described for controlling a robot's gestures and utterances during a dialogue based on the listener's understanding and interest, which are recognized from back-channels and head gestures. "Back-channels" are defined as sounds like 'uh-huh' uttered by a listener during a dialogue, and "head gestures" are defined as nod and tilt motions of the listener's head. The back-channels are recognized using sound features such as power and fundamental frequency. The head gestures are recognized using the movement of the skin-color area and the optical flow data. Based on the estimated understanding and interest of the listener, the speed and size of the robot's motions are changed. This method was implemented in a humanoid robot called SIG2. Experiments with six participants demonstrated that the proposed method enabled the robot to increase the listener's level of interest in the dialogue.
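The abstract describes recognizing back-channels from sound features such as power and fundamental frequency (F0). A minimal sketch of such a classifier is below; the feature representation, thresholds, and function name are illustrative assumptions, not the paper's actual method or values.

```python
# Hedged sketch: detecting a back-channel ('uh-huh') from simple
# acoustic features. All thresholds here are hypothetical.

def is_backchannel(duration_s, mean_power_db, f0_contour_hz):
    """Classify a short utterance as a back-channel.

    duration_s     -- utterance length in seconds
    mean_power_db  -- mean power relative to full scale (dBFS)
    f0_contour_hz  -- list of F0 estimates (Hz) over the utterance
    """
    if not f0_contour_hz:
        return False  # no voiced speech detected
    # Back-channels are typically very short...
    if duration_s > 0.6:
        return False
    # ...audible above an assumed noise floor...
    if mean_power_db < -30.0:
        return False
    # ...and have a fairly flat pitch contour (small F0 range).
    f0_range = max(f0_contour_hz) - min(f0_contour_hz)
    return f0_range < 40.0  # Hz; hypothetical threshold

# Usage: a short, flat-pitched 'uh-huh' vs. a longer full utterance.
print(is_backchannel(0.3, -20.0, [120, 125, 118]))  # True
print(is_backchannel(1.2, -18.0, [110, 180, 90]))   # False
```

In practice the features themselves would be computed from the audio signal (e.g. frame-level power and a pitch tracker for F0); this sketch only shows the decision step.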

Original language: English
Title of host publication: 8th International Conference on Spoken Language Processing, ICSLP 2004
Publisher: International Speech Communication Association
Pages: 1033-1036
Number of pages: 4
Publication status: Published - 2004
Externally published: Yes
Event: 8th International Conference on Spoken Language Processing, ICSLP 2004 - Jeju, Jeju Island, Korea, Republic of
Duration: 2004 Oct 4 - 2004 Oct 8

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language
