A MULTIMODAL DATABASE OF GESTURES AND SPEECH

Satoru Hayamizu*, Shigeki Nagaya, Keiko Watanuki, Masayuki Nakazawa, Shuichi Nobe, Takashi Yoshimura

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

2 Citations (Scopus)

Abstract

This paper describes a multimodal database consisting of image data of human gestures and the corresponding speech data for research on multimodal interaction systems. The purpose of the database is to provide an underlying foundation for the research and development of multimodal interactive systems. The primary concern in selecting utterances and gestures for inclusion was to ascertain the kinds of expressions and gestures that artificial systems could produce and recognize. A total of 25 kinds of gestures with corresponding speech were each repeated four times in the recording of each subject. The speech and gestures of 48 subjects were recorded and converted into files; in the first version, the files for 12 subjects were released on CD-ROMs.
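As a rough illustration of the recording scheme summarized in the abstract (25 gesture/speech categories, four repetitions per subject, 48 subjects recorded, 12 included in the first CD-ROM release), the Python sketch below enumerates the resulting recording index. The counts come from the abstract; the Recording structure, field names, and enumeration order are hypothetical and not taken from the database itself.

```python
from dataclasses import dataclass
from typing import Iterator

# Counts taken from the abstract; naming and structure below are hypothetical.
NUM_GESTURE_TYPES = 25   # kinds of gestures with corresponding speech
NUM_REPETITIONS = 4      # each kind repeated four times per subject
NUM_SUBJECTS_TOTAL = 48  # subjects recorded in total
NUM_SUBJECTS_V1 = 12     # subjects included in the first CD-ROM release


@dataclass
class Recording:
    subject: int     # 1-based subject index
    gesture: int     # 1-based gesture/speech category index
    repetition: int  # 1-based repetition index


def enumerate_recordings(num_subjects: int = NUM_SUBJECTS_V1) -> Iterator[Recording]:
    """Yield one entry per (subject, gesture, repetition) combination."""
    for s in range(1, num_subjects + 1):
        for g in range(1, NUM_GESTURE_TYPES + 1):
            for r in range(1, NUM_REPETITIONS + 1):
                yield Recording(subject=s, gesture=g, repetition=r)


if __name__ == "__main__":
    # 12 subjects x 25 categories x 4 repetitions = 1200 entries in the first release.
    print(sum(1 for _ in enumerate_recordings()))
```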

Original language: English
Pages: 2247-2250
Number of pages: 4
Publication status: Published - 1999
Externally published: Yes
Event: 6th European Conference on Speech Communication and Technology, EUROSPEECH 1999 - Budapest, Hungary
Duration: 1999 Sept 5 - 1999 Sept 9

Conference

Conference: 6th European Conference on Speech Communication and Technology, EUROSPEECH 1999
Country/Territory: Hungary
City: Budapest
Period: 99/9/5 - 99/9/9

Keywords

  • database
  • gesture
  • multimodal

ASJC Scopus subject areas

  • Computer Science Applications
  • Software
  • Linguistics and Language
  • Communication
