TY - JOUR
T1 - Tool-body assimilation model considering grasping motion through deep learning
AU - Takahashi, Kuniyuki
AU - Kim, Kitae
AU - Ogata, Tetsuya
AU - Sugano, Shigeki
N1 - Publisher Copyright:
© 2017 The Authors
PY - 2017/5/1
Y1 - 2017/5/1
N2 - We propose a tool-body assimilation model that considers grasping during motor babbling for using tools. A robot with tool-use skills can be useful in human–robot symbiosis because these skills allow the robot to expand its task-performing abilities. Past studies that included tool-body assimilation approaches focused mainly on obtaining the functions of the tools, and demonstrated the robot starting its motions with a tool pre-attached to the robot. This implies that the robot would not be able to decide whether and where to grasp the tool. In real-life environments, robots would need to consider the possible tool-grasping positions, and then grasp the tool. To address these issues, the robot performs motor babbling both with and without grasping the tools to learn the robot's body model and tool functions. In addition, the robot grasps various parts of the tools to learn the different tool functions afforded by different grasping positions. The motion experiences are learned using deep learning. In model evaluation, the robot performs an object manipulation task without tools, and with several tools of different shapes. The robot generates motions after being shown the initial state and a target image, deciding whether and where to grasp the tool. Therefore, the robot is capable of generating the correct motion and grasping decision when the initial state and a target image are provided to it.
AB - We propose a tool-body assimilation model that considers grasping during motor babbling for using tools. A robot with tool-use skills can be useful in human–robot symbiosis because these skills allow the robot to expand its task-performing abilities. Past studies that included tool-body assimilation approaches focused mainly on obtaining the functions of the tools, and demonstrated the robot starting its motions with a tool pre-attached to the robot. This implies that the robot would not be able to decide whether and where to grasp the tool. In real-life environments, robots would need to consider the possible tool-grasping positions, and then grasp the tool. To address these issues, the robot performs motor babbling both with and without grasping the tools to learn the robot's body model and tool functions. In addition, the robot grasps various parts of the tools to learn the different tool functions afforded by different grasping positions. The motion experiences are learned using deep learning. In model evaluation, the robot performs an object manipulation task without tools, and with several tools of different shapes. The robot generates motions after being shown the initial state and a target image, deciding whether and where to grasp the tool. Therefore, the robot is capable of generating the correct motion and grasping decision when the initial state and a target image are provided to it.
KW - Deep neural network
KW - Motor babbling
KW - Recurrent neural network
KW - Tool-body assimilation
KW - Transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85015064478&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85015064478&partnerID=8YFLogxK
U2 - 10.1016/j.robot.2017.01.002
DO - 10.1016/j.robot.2017.01.002
M3 - Article
AN - SCOPUS:85015064478
SN - 0921-8890
VL - 91
SP - 115
EP - 127
JO - Robotics and Autonomous Systems
JF - Robotics and Autonomous Systems
ER -