TY - GEN
T1 - Wiping 3D-objects using deep learning model based on image/force/joint information
AU - Saito, Namiko
AU - Wang, Danyang
AU - Ogata, Tetsuya
AU - Mori, Hiroki
AU - Sugano, Shigeki
N1 - Funding Information:
*This research was partially supported by the JSPS Grant-in-Aid for Scientific Research (A) No. 19H01130, and Research Institute for Science and Engineering of Waseda University.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/10/24
Y1 - 2020/10/24
N2 - We propose a deep learning model that enables a robot to wipe 3D objects. Wiping a 3D object requires recognizing its shape and planning the motor angle adjustments needed to trace it. Unlike previous research, our learning model does not require pre-designed computational models of target objects. The robot is able to wipe objects placed before it using image, force, and arm joint information. We evaluate the generalization ability of the model by confirming that the robot handles untrained cube- and bowl-shaped objects. By comparing changes in the sensor data input to the model, we also find that both image and force information are necessary to recognize the shape of, and consistently wipe, 3D objects. To our knowledge, this is the first work enabling a robot to trace various unknown 3D shapes using learned sensorimotor information alone.
AB - We propose a deep learning model that enables a robot to wipe 3D objects. Wiping a 3D object requires recognizing its shape and planning the motor angle adjustments needed to trace it. Unlike previous research, our learning model does not require pre-designed computational models of target objects. The robot is able to wipe objects placed before it using image, force, and arm joint information. We evaluate the generalization ability of the model by confirming that the robot handles untrained cube- and bowl-shaped objects. By comparing changes in the sensor data input to the model, we also find that both image and force information are necessary to recognize the shape of, and consistently wipe, 3D objects. To our knowledge, this is the first work enabling a robot to trace various unknown 3D shapes using learned sensorimotor information alone.
UR - http://www.scopus.com/inward/record.url?scp=85102410554&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85102410554&partnerID=8YFLogxK
U2 - 10.1109/IROS45743.2020.9341275
DO - 10.1109/IROS45743.2020.9341275
M3 - Conference contribution
AN - SCOPUS:85102410554
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 10152
EP - 10157
BT - 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020
Y2 - 24 October 2020 through 24 January 2021
ER -