TY - GEN
T1 - Variable in-hand manipulations for tactile-driven robot hand via CNN-LSTM
AU - Funabashi, Satoshi
AU - Ogasa, Shun
AU - Isobe, Tomoki
AU - Ogata, Tetsuya
AU - Schmitz, Alexander
AU - Tomo, Tito Pradhono
AU - Sugano, Shigeki
N1 - Funding Information:
This research was supported by the JSPS Grant-in-Aid No. 19H02116, No. 19H01130, the JST ACT-I Information and Future No. 50185, the Tateishi Science and Technology Foundation Research Grant (S), and the Research Institute for Science and Engineering, Waseda University.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/10/24
Y1 - 2020/10/24
AB - Performing various in-hand manipulation tasks without learning each individual task would enable robots to be more versatile while reducing the training effort. However, stable in-hand manipulation is generally difficult to achieve, because the contact state between the fingertips is difficult to model, especially for a robot hand with anthropomorphically shaped fingertips. Rich tactile feedback can aid robust task execution, but processing high-dimensional tactile information is challenging. In the current paper, we use two fingers of the Allegro Hand; each fingertip is anthropomorphically shaped and equipped not only with a 6-axis force-torque (F/T) sensor but also with a uSkin tactile sensor, which provides 24 tri-axial measurements per fingertip. A convolutional neural network is used to process the high-dimensional uSkin information, and a long short-term memory (LSTM) network handles the time-series information. The network is trained to generate two different motions ("twist" and "push"). The desired motion is provided as a task parameter to the network, with twist defined as -1 and push as +1. When values between -1 and +1 are used as the task parameter, the network is able to generate untrained motions in between the two trained motions. Thereby, we can achieve multiple untrained manipulations and achieve robustness through high-dimensional tactile feedback.
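N1 - Editor's note: for a concrete picture of the architecture summarized in the abstract, the following is a minimal, illustrative sketch (PyTorch) of a CNN-LSTM conditioned on a scalar task parameter in [-1, +1]. The taxel layout (assumed 4x6 grid per fingertip), layer sizes, joint count, and all identifiers are assumptions for illustration only, not the authors' published implementation.

# Illustrative sketch only: CNN-LSTM mapping uSkin tactile sequences, F/T
# readings, and a scalar task parameter (-1 = "twist", +1 = "push") to
# per-timestep joint commands. All shapes and sizes are assumptions.
import torch
import torch.nn as nn

class TactileCNNLSTM(nn.Module):
    def __init__(self, num_fingertips=2, hidden_size=64, num_joints=8):
        super().__init__()
        # Each fingertip: 24 tri-axial taxels, assumed arranged as a 4x6 grid
        # with 3 channels per taxel (x/y shear and normal force).
        in_channels = 3 * num_fingertips
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (B*T, 32, 1, 1)
            nn.Flatten(),              # -> (B*T, 32)
        )
        # 32 CNN features + 6 F/T values per fingertip + 1 task parameter.
        lstm_in = 32 + 6 * num_fingertips + 1
        self.lstm = nn.LSTM(lstm_in, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_joints)

    def forward(self, tactile, ft, task_param):
        # tactile:    (B, T, 3*num_fingertips, 4, 6) uSkin sequences
        # ft:         (B, T, 6*num_fingertips)       force-torque sequences
        # task_param: (B,) scalar in [-1, +1]
        B, T = tactile.shape[:2]
        feat = self.cnn(tactile.flatten(0, 1)).view(B, T, -1)
        task = task_param.view(B, 1, 1).expand(B, T, 1)
        x = torch.cat([feat, ft, task], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out)          # per-timestep joint commands

# Usage example: a task parameter of 0.0 asks for a motion between the two
# trained motions (twist = -1, push = +1).
model = TactileCNNLSTM()
tactile = torch.randn(1, 50, 6, 4, 6)
ft = torch.randn(1, 50, 12)
cmd = model(tactile, ft, torch.tensor([0.0]))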
KW - Multi-in-hand manipulation
KW - Neural networks
KW - Tactile sensing
UR - http://www.scopus.com/inward/record.url?scp=85102414459&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85102414459&partnerID=8YFLogxK
U2 - 10.1109/IROS45743.2020.9341484
DO - 10.1109/IROS45743.2020.9341484
M3 - Conference contribution
AN - SCOPUS:85102414459
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 9472
EP - 9479
BT - 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020
Y2 - 24 October 2020 through 24 January 2021
ER -