Tool-body assimilation model considering grasping motion through deep learning

Kuniyuki Takahashi*, Kitae Kim, Tetsuya Ogata, Shigeki Sugano

*Corresponding author of this work

Research output: Article, peer-reviewed

36 citations (Scopus)

Abstract

We propose a tool-body assimilation model that considers grasping during motor babbling for using tools. A robot with tool-use skills can be useful in human–robot symbiosis because these skills allow the robot to expand its task-performing abilities. Past studies that included tool-body assimilation approaches mainly focused on obtaining the functions of the tools, and demonstrated the robot starting its motions with a tool pre-attached to the robot. This implies that the robot could not decide whether and where to grasp the tool. In real-life environments, robots need to consider possible tool-grasping positions and then grasp the tool. To address these issues, the robot performs motor babbling both with and without grasping the tools, to learn the robot's body model and the tool functions. In addition, the robot grasps various parts of the tools to learn the different tool functions that arise from different grasping positions. The motion experiences are learned using deep learning. In the model evaluation, the robot performs an object-manipulation task without tools and with several tools of different shapes. The robot generates motions after being shown the initial state and a target image, deciding whether and where to grasp the tool. Therefore, the robot is capable of generating the correct motion and grasping decision when the initial state and a target image are provided to it.
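The abstract does not give implementation details of the learned model. As a purely illustrative sketch (not the authors' actual architecture), motion generation conditioned on an initial state and a target image could be rolled out with a small recurrent network; all dimensions, weight names, and the increment-prediction scheme below are assumptions for illustration, and in practice the weights would be trained on the motor-babbling sequences:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
IMG_FEAT = 16   # image feature size, e.g. from a learned image encoder
JOINT = 7       # robot joint angles
HIDDEN = 32
STEPS = 10

# Hypothetical weights; in practice these would be learned from
# motor-babbling experiences rather than sampled randomly.
W_in = rng.normal(0.0, 0.1, (HIDDEN, IMG_FEAT * 2 + JOINT))
W_h = rng.normal(0.0, 0.1, (HIDDEN, HIDDEN))
W_out = rng.normal(0.0, 0.1, (JOINT, HIDDEN))

def generate_motion(init_img_feat, target_img_feat, init_joints):
    """Roll out a joint-angle sequence conditioned on the initial
    and target image features, starting from the initial posture."""
    h = np.zeros(HIDDEN)
    joints = init_joints.copy()
    trajectory = []
    for _ in range(STEPS):
        x = np.concatenate([init_img_feat, target_img_feat, joints])
        h = np.tanh(W_in @ x + W_h @ h)      # recurrent state update
        joints = joints + W_out @ h          # predict joint increments
        trajectory.append(joints.copy())
    return np.stack(trajectory)             # shape: (STEPS, JOINT)

traj = generate_motion(rng.normal(size=IMG_FEAT),
                       rng.normal(size=IMG_FEAT),
                       np.zeros(JOINT))
print(traj.shape)
```

Whether and where to grasp would, in this kind of setup, be read off from the generated trajectory itself rather than decided by a separate module.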

Original language: English
Pages (from-to): 115-127
Number of pages: 13
Journal: Robotics and Autonomous Systems
Volume: 91
DOI
Publication status: Published - 1 May 2017

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Mathematics (all)
  • Computer Science Applications

