Inter-modality mapping in robot with recurrent neural network

Tetsuya Ogata*, Shun Nishide, Hideki Kozima, Kazunori Komatani, Hiroshi G. Okuno

*Corresponding author for this work

Research output: Article › peer-review

23 Citations (Scopus)

Abstract

A system for mapping between different sensory modalities was developed for a robot to enable it both to generate motions expressing auditory signals and to generate sounds expressing object movements. A recurrent neural network model with parametric bias, which has strong generalization ability, is used as the learning model. Because the possible correspondences between auditory and visual signals are too numerous to memorize, this ability to generalize is indispensable. The system was implemented in the "Keepon" robot, which was shown a box object being manipulated: horizontal reciprocating or rotating motions accompanied by friction sounds, and falling or overturning motions accompanied by collision sounds. Keepon behaved appropriately not only in response to learned events but also to unknown events, and it generated various sounds in accordance with observed motions.
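The abstract names a recurrent neural network with parametric bias (RNNPB) as the learning model. The sketch below illustrates the general idea of such a model in PyTorch: shared recurrent weights are trained across all sequences, while a small learnable parametric-bias vector per sequence encodes which event is being represented. All dimensions, the training loop, and the toy data are illustrative assumptions, not the paper's actual configuration.

# Minimal RNNPB sketch (illustrative assumptions, not the paper's exact setup).
import torch
import torch.nn as nn

class RNNPB(nn.Module):
    def __init__(self, in_dim, pb_dim, hidden_dim, out_dim, n_sequences):
        super().__init__()
        # One learnable parametric-bias vector per training sequence.
        self.pb = nn.Parameter(torch.zeros(n_sequences, pb_dim))
        self.rnn = nn.RNN(in_dim + pb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, seq_idx):
        # x: (batch, time, in_dim); the same PB vector is fed at every step.
        pb = self.pb[seq_idx].unsqueeze(1).expand(-1, x.size(1), -1)
        h, _ = self.rnn(torch.cat([x, pb], dim=-1))
        return self.out(h)

# Training objective: predict the next multimodal frame from the current one,
# so the shared weights generalize while each PB vector encodes one event.
model = RNNPB(in_dim=8, pb_dim=2, hidden_dim=32, out_dim=8, n_sequences=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seq = torch.randn(4, 20, 8)            # toy stand-in for sensory sequences
idx = torch.arange(4)                  # which PB vector belongs to which sequence
for _ in range(100):
    opt.zero_grad()
    pred = model(seq[:, :-1], idx)     # predict frames 1..T from frames 0..T-1
    loss = nn.functional.mse_loss(pred, seq[:, 1:])
    loss.backward()
    opt.step()

In the RNNPB literature, recognition of a new sequence is typically done by freezing the network weights and optimizing a fresh PB vector so the model reproduces the observed sequence; the inferred PB can then drive generation in the other modality.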

Original language: English
Pages (from-to): 1560-1569
Number of pages: 10
Journal: Pattern Recognition Letters
Volume: 31
Issue number: 12
DOI
Publication status: Published - Sep 1, 2010
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
