TY - GEN
T1 - Versatile In-Hand Manipulation of Objects with Different Sizes and Shapes Using Neural Networks
AU - Funabashi, Satoshi
AU - Schmitz, Alexander
AU - Sato, Takashi
AU - Somlor, Sophon
AU - Sugano, Shigeki
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
N2 - Changing the grasping posture of objects within a robot hand is hard to achieve, especially if the objects are of various shapes and sizes. In this paper we use a neural network to learn such manipulation with variously sized and shaped objects. The TWENDY-ONE hand possesses various properties that are effective for in-hand manipulation: a high number of actuated joints, passive degrees of freedom and soft skin, six-axis force/torque (F/T) sensors in each fingertip, and distributed tactile sensors in the soft skin. The object size information is extracted from the initial grasping posture. The training data includes tactile and object information. After training the neural network, the robot is able to manipulate objects not only of trained but also of untrained sizes and shapes. The results show the importance of size and tactile information. Importantly, the features extracted by a stacked autoencoder (trained with a larger dataset) could reduce the number of required training samples for supervised learning of in-hand manipulation.
AB - Changing the grasping posture of objects within a robot hand is hard to achieve, especially if the objects are of various shapes and sizes. In this paper we use a neural network to learn such manipulation with variously sized and shaped objects. The TWENDY-ONE hand possesses various properties that are effective for in-hand manipulation: a high number of actuated joints, passive degrees of freedom and soft skin, six-axis force/torque (F/T) sensors in each fingertip, and distributed tactile sensors in the soft skin. The object size information is extracted from the initial grasping posture. The training data includes tactile and object information. After training the neural network, the robot is able to manipulate objects not only of trained but also of untrained sizes and shapes. The results show the importance of size and tactile information. Importantly, the features extracted by a stacked autoencoder (trained with a larger dataset) could reduce the number of required training samples for supervised learning of in-hand manipulation.
UR - http://www.scopus.com/inward/record.url?scp=85062258028&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85062258028&partnerID=8YFLogxK
U2 - 10.1109/HUMANOIDS.2018.8624961
DO - 10.1109/HUMANOIDS.2018.8624961
M3 - Conference contribution
AN - SCOPUS:85062258028
T3 - IEEE-RAS International Conference on Humanoid Robots
SP - 768
EP - 775
BT - 2018 IEEE-RAS 18th International Conference on Humanoid Robots, Humanoids 2018
PB - IEEE Computer Society
T2 - 18th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2018
Y2 - 6 November 2018 through 9 November 2018
ER -