Converting emotional voice to motion for robot telepresence

Angelica Lim*, Tetsuya Ogata, Hiroshi G. Okuno

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

19 Citations (Scopus)

Abstract

In this paper, we present a new method for producing affective motion for humanoid robots. The NAO robot, like many other humanoids, lacks the facial features needed to convey emotion. Instead, our proposed system generates pose-independent robot movement using a description of emotion through speed, intensity, regularity and extent (DESIRE). We show how the DESIRE framework can link the emotional content of voice and gesture without the need for an emotion recognition system. Our results show that DESIRE movement can effectively convey at least four emotions, with user agreement of 60-75%, and that voices converted to motion through SIRE maintained the same emotion at rates significantly higher than chance, even across cultures (German to Japanese). Additionally, portrayals recognized as happiness were rated as significantly easier to understand with motion than with voice alone.
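To make the voice-to-motion idea concrete, the sketch below shows one plausible way to estimate the four SIRE parameters from a voice signal and hand them to a gesture controller. This is a minimal illustration, not the authors' published implementation: the acoustic proxies (energy-peak rate, RMS level, inter-peak-interval variability, dynamic range) and the motion parameter names (`velocity`, `stiffness`, `smoothness`, `amplitude`) are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of a SIRE-style voice-to-motion mapping.
# Feature choices and parameter names are illustrative assumptions,
# not the paper's actual implementation.
import numpy as np

def extract_sire(samples: np.ndarray, sr: int = 16000) -> dict:
    """Estimate speed, intensity, regularity, extent from a mono signal.

    All four values are normalized to [0, 1]. The proxies used here
    are plausible stand-ins, not the paper's exact features.
    """
    frame = int(0.025 * sr)   # 25 ms analysis frames
    hop = int(0.010 * sr)     # 10 ms hop
    n = max(1, (len(samples) - frame) // hop)
    rms = np.array([
        np.sqrt(np.mean(samples[i * hop: i * hop + frame] ** 2))
        for i in range(n)
    ])
    # Peaks in the energy contour serve as a rough syllable-rate proxy.
    thresh = rms.mean() + 0.5 * rms.std()
    peaks = np.where((rms[1:-1] > rms[:-2]) &
                     (rms[1:-1] > rms[2:]) &
                     (rms[1:-1] > thresh))[0] + 1
    duration = len(samples) / sr
    speed = np.clip(len(peaks) / duration / 8.0, 0.0, 1.0)  # ~8 peaks/s ceiling
    intensity = np.clip(rms.mean() / (np.abs(samples).max() + 1e-9), 0.0, 1.0)
    if len(peaks) > 2:
        ipi = np.diff(peaks).astype(float)  # inter-peak intervals, in frames
        regularity = np.clip(1.0 - ipi.std() / (ipi.mean() + 1e-9), 0.0, 1.0)
    else:
        regularity = 0.5  # too little voicing to judge
    extent = np.clip((rms.max() - rms.min()) / (rms.max() + 1e-9), 0.0, 1.0)
    return {"speed": speed, "intensity": intensity,
            "regularity": regularity, "extent": extent}

def sire_to_motion(sire: dict) -> dict:
    """Map SIRE values onto pose-independent motion parameters.

    The target names are assumed labels for whatever gesture
    controller consumes them on the robot side.
    """
    return {
        "velocity":   sire["speed"],       # faster speech -> faster gesture
        "stiffness":  sire["intensity"],   # louder speech -> more forceful motion
        "smoothness": sire["regularity"],  # irregular speech -> jerkier motion
        "amplitude":  sire["extent"],      # wider dynamics -> larger movement
    }

if __name__ == "__main__":
    sr = 16000
    t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
    # Synthetic "voice": an amplitude-modulated tone standing in for speech.
    voice = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
    print(sire_to_motion(extract_sire(voice, sr)))
```

Because the mapping operates only on these four scalar parameters, any gesture trajectory can be modulated without knowing the underlying pose, which is what makes the approach pose-independent and lets it skip explicit emotion recognition.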

Original language: English
Title of host publication: 2011 11th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS 2011
Pages: 472-479
Number of pages: 8
DOIs
Publication status: Published - 2011
Externally published: Yes
Event: 2011 11th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS 2011 - Bled, Slovenia
Duration: 2011 Oct 26 - 2011 Oct 28

Publication series

Name: IEEE-RAS International Conference on Humanoid Robots
ISSN (Print): 2164-0572
ISSN (Electronic): 2164-0580

Conference

Conference: 2011 11th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS 2011
Country/Territory: Slovenia
City: Bled
Period: 11/10/26 - 11/10/28

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Hardware and Architecture
  • Human-Computer Interaction
  • Electrical and Electronic Engineering
