Detecting features of tools, objects, and actions from effects in a robot using deep learning

Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya Ogata, Shigeki Sugano

Research output: Conference contribution

2 Citations (Scopus)

Abstract

We propose a tool-use model that can detect the features of tools, target objects, and actions from the provided effects of object manipulation. Taking infant learning as its conceptual basis, we construct a model that enables robots to manipulate objects with tools. To realize this, we train a deep learning model on sensory-motor data recorded while a robot performs a tool-use task. The experiments involve four factors, which the model considers simultaneously: (1) tools, (2) objects, (3) actions, and (4) effects. For evaluation, the robot generates predicted images and motions given information about the effects of using unknown tools and objects. We confirm that the robot is capable of detecting features of tools, objects, and actions by learning the effects and executing the task.
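The prediction setup described in the abstract, a model trained on sensory-motor sequences that generates predicted images and motions conditioned on a desired effect, could be sketched roughly as below. This is a minimal illustrative sketch, not the authors' actual architecture: the network type (a simple Elman-style recurrent net), all dimensions, and all variable names are assumptions introduced for illustration, and training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions (illustrative only): image features, motor command
# (e.g. joint angles), effect descriptor, and recurrent hidden state.
IMG_DIM, MOTOR_DIM, EFFECT_DIM, HIDDEN_DIM = 20, 7, 3, 32
IN_DIM = IMG_DIM + MOTOR_DIM + EFFECT_DIM

# Randomly initialized weights; learning them (e.g. backpropagation
# through time on recorded tool-use sequences) is omitted for brevity.
W_in = rng.standard_normal((HIDDEN_DIM, IN_DIM)) * 0.1
W_rec = rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM)) * 0.1
W_out = rng.standard_normal((IMG_DIM + MOTOR_DIM, HIDDEN_DIM)) * 0.1

def predict_sequence(effect, first_frame, steps):
    """Closed-loop rollout: each predicted sensory-motor frame is fed
    back as the next input, conditioned on the desired effect vector."""
    h = np.zeros(HIDDEN_DIM)
    frame = first_frame
    predictions = []
    for _ in range(steps):
        x = np.concatenate([frame, effect])
        h = np.tanh(W_in @ x + W_rec @ h)   # recurrent state update
        frame = W_out @ h                   # next image features + motor command
        predictions.append(frame)
    return np.stack(predictions)

effect = rng.standard_normal(EFFECT_DIM)               # desired manipulation effect
first_frame = rng.standard_normal(IMG_DIM + MOTOR_DIM) # initial observation
rollout = predict_sequence(effect, first_frame, steps=10)
print(rollout.shape)  # (10, 27): 10 predicted sensory-motor frames
```

The closed-loop rollout mirrors the evaluation described above: given only an effect specification and an initial observation, the model unrolls a full predicted image-and-motion sequence.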

Original language: English
Title of host publication: 2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 91-96
Number of pages: 6
ISBN (Electronic): 9781538661109
DOI
Publication status: Published - Sept 2018
Event: Joint 8th IEEE International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2018 - Tokyo, Japan
Duration: 16 Sept 2018 - 20 Sept 2018

Publication series

Name: 2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2018

Conference

Conference: Joint 8th IEEE International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2018
Country/Territory: Japan
City: Tokyo
Period: 2018/9/16 - 2018/9/20

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Control and Optimization
  • Behavioral Neuroscience
  • Developmental Neuroscience
  • Artificial Intelligence
