Multimodal corpora for human-machine interaction research

Satoshi Nakamura, Keiko Watanuki, Toshiyuki Takezawa, Satoru Hayamizu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In recent years, human-machine interaction has grown in importance. One approach to ideal human-machine interaction is to develop a multimodal system that behaves like a human being. This paper presents an overview of multimodal corpora currently being developed in Japan for this purpose. It describes databases of 1) multimodal interaction, 2) audio-visual speech, 3) spoken dialogue with multiple speakers, 4) sign-language gesture, and 5) sound scene data in real acoustic environments.

Original language: English
Title of host publication: 6th International Conference on Spoken Language Processing, ICSLP 2000
Publisher: International Speech Communication Association
ISBN (Electronic): 7801501144, 9787801501141
Publication status: Published - 2000
Externally published: Yes
Event: 6th International Conference on Spoken Language Processing, ICSLP 2000 - Beijing, China
Duration: 2000 Oct 16 - 2000 Oct 20

Publication series

Name: 6th International Conference on Spoken Language Processing, ICSLP 2000

Other

Other: 6th International Conference on Spoken Language Processing, ICSLP 2000
Country/Territory: China
City: Beijing
Period: 00/10/16 - 00/10/20

ASJC Scopus subject areas

  • Linguistics and Language
  • Language and Linguistics
