Towards expressive musical robots: A cross-modal framework for emotional gesture, voice and music

Angelica Lim*, Tetsuya Ogata, Hiroshi G. Okuno

*Corresponding author of this work

Research output: Article › peer-review

22 Citations (Scopus)

Abstract

It has long been speculated that expressions of emotion across different modalities share the same underlying 'code', whether it be a dance step, musical phrase, or tone of voice. This work is the first attempt to implement this theory across three modalities, inspired by the polyvalence and repeatability of robotics. We propose a unifying framework to generate emotions across voice, gesture, and music by representing emotional states as a 4-parameter tuple of speed, intensity, regularity, and extent (SIRE). Our results show that this simple 4-tuple can capture four emotions recognizable at greater-than-chance rates across gesture and voice, and at least two emotions across all three modalities. An application to multi-modal, expressive music robots is discussed.
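The abstract describes emotional states as a 4-parameter SIRE tuple (speed, intensity, regularity, extent) shared across voice, gesture, and music. The sketch below is only an illustration of what such a representation could look like; the normalized 0–1 value ranges and the per-modality parameter names (speech_rate, amplitude, etc.) are assumptions for this example and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class SIRE:
    """Illustrative SIRE emotional-state tuple.

    Values are assumed to be normalized to [0, 1] for this sketch;
    the paper itself does not prescribe these ranges.
    """
    speed: float       # how fast the gesture, utterance, or phrase unfolds
    intensity: float   # force or loudness of the expression
    regularity: float  # evenness of timing (low values = jittery)
    extent: float      # spatial amplitude or pitch range

def to_voice(state: SIRE) -> dict:
    """Map one SIRE state onto hypothetical speech-synthesis parameters."""
    return {
        "speech_rate": state.speed,
        "volume": state.intensity,
        "pitch_jitter": 1.0 - state.regularity,
        "pitch_range": state.extent,
    }

def to_gesture(state: SIRE) -> dict:
    """Map the same SIRE state onto hypothetical robot-motion parameters."""
    return {
        "velocity": state.speed,
        "acceleration": state.intensity,
        "timing_variance": 1.0 - state.regularity,
        "amplitude": state.extent,
    }

# Example: high speed and intensity, low regularity, large extent,
# loosely corresponding to an excited expression in both modalities.
excited = SIRE(speed=0.9, intensity=0.8, regularity=0.3, extent=0.9)
print(to_voice(excited))
print(to_gesture(excited))
```

The point of the sketch is that a single tuple drives every modality: only the mapping functions differ, which mirrors the cross-modal "code" the abstract hypothesizes.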

Original language: English
Article number: 3
Journal: EURASIP Journal on Audio, Speech, and Music Processing
Volume: 2012
Issue number: 1
DOI
Publication status: Published - 2012
Externally published: Yes

ASJC Scopus subject areas

  • Acoustics and Ultrasonics
  • Electrical and Electronic Engineering
