TY - GEN
T1 - Tactile object recognition using deep learning and dropout
AU - Schmitz, Alexander
AU - Bansho, Yusuke
AU - Noda, Kuniaki
AU - Iwata, Hiroyasu
AU - Ogata, Tetsuya
AU - Sugano, Shigeki
PY - 2015/2/12
Y1 - 2015/2/12
N2 - Recognizing grasped objects with tactile sensors is beneficial in many situations, as other sensor information such as vision is not always reliable. In this paper, we aim for multimodal object recognition by power grasping of objects with an unknown orientation and position relative to the hand. Few robots have the tactile sensors necessary to reliably recognize objects: in this study the multifingered hand of TWENDY-ONE is used, which has distributed skin sensors covering most of the hand, 6-axis F/T sensors in each fingertip, and provides information about the joint angles. Moreover, the hand is compliant. When using tactile sensors, it is not clear what kinds of features are useful for object recognition. Recently, deep learning has shown promising results. Nevertheless, deep learning has rarely been used in robotics and, to the best of our knowledge, never for tactile sensing, probably because it is difficult to gather many samples with tactile sensors. Our results show a clear improvement when using a denoising autoencoder with dropout compared to traditional neural networks. Nevertheless, a higher number of layers did not prove to be beneficial.
AB - Recognizing grasped objects with tactile sensors is beneficial in many situations, as other sensor information such as vision is not always reliable. In this paper, we aim for multimodal object recognition by power grasping of objects with an unknown orientation and position relative to the hand. Few robots have the tactile sensors necessary to reliably recognize objects: in this study the multifingered hand of TWENDY-ONE is used, which has distributed skin sensors covering most of the hand, 6-axis F/T sensors in each fingertip, and provides information about the joint angles. Moreover, the hand is compliant. When using tactile sensors, it is not clear what kinds of features are useful for object recognition. Recently, deep learning has shown promising results. Nevertheless, deep learning has rarely been used in robotics and, to the best of our knowledge, never for tactile sensing, probably because it is difficult to gather many samples with tactile sensors. Our results show a clear improvement when using a denoising autoencoder with dropout compared to traditional neural networks. Nevertheless, a higher number of layers did not prove to be beneficial.
UR - http://www.scopus.com/inward/record.url?scp=84945179931&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84945179931&partnerID=8YFLogxK
U2 - 10.1109/HUMANOIDS.2014.7041493
DO - 10.1109/HUMANOIDS.2014.7041493
M3 - Conference contribution
AN - SCOPUS:84945179931
T3 - IEEE-RAS International Conference on Humanoid Robots
SP - 1044
EP - 1050
BT - 2014 IEEE-RAS International Conference on Humanoid Robots, Humanoids 2014
PB - IEEE Computer Society
T2 - 2014 14th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2014
Y2 - 18 November 2014 through 20 November 2014
ER -