TY - GEN
T1 - Disaster Response Robot's Autonomous Manipulation of Valves in Disaster Sites Based on Visual Analyses of RGBD Images
AU - Nishikawa, Keishi
AU - Imai, Asaki
AU - Miyakawa, Kazuya
AU - Kanda, Takuya
AU - Matsuzawa, Takashi
AU - Hashimoto, Kenji
AU - Takanishi, Atsuo
AU - Ogata, Hiroyuki
AU - Ohya, Jun
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - Toward building a fully automated valve-manipulation system for the disaster response robot WAREC-1, this paper proposes methods for (1) detecting a valve that is far away from the robot and (2) estimating the position and orientation for grasping the valve once the robot is at a closer position. Our methods need no prior information about the valve for the above-mentioned detection and grasp estimation. In addition, the grasp estimation provides the information with which WAREC-1 can rotate the valve autonomously. Method (1) takes the RGB image and the point cloud data captured by a MultiSense SL as input and estimates the position and orientation of a valve far away from the robot. Method (2) takes the RGB and depth images captured by a Kinect V2 as input and estimates the information needed for grasping the valve. Our experiments are conducted using a real disaster response robot. The experimental results show that the estimation errors of the two methods are small enough to achieve a fully automated system for detecting and rotating the valve with WAREC-1.
AB - Toward building a fully automated valve-manipulation system for the disaster response robot WAREC-1, this paper proposes methods for (1) detecting a valve that is far away from the robot and (2) estimating the position and orientation for grasping the valve once the robot is at a closer position. Our methods need no prior information about the valve for the above-mentioned detection and grasp estimation. In addition, the grasp estimation provides the information with which WAREC-1 can rotate the valve autonomously. Method (1) takes the RGB image and the point cloud data captured by a MultiSense SL as input and estimates the position and orientation of a valve far away from the robot. Method (2) takes the RGB and depth images captured by a Kinect V2 as input and estimates the information needed for grasping the valve. Our experiments are conducted using a real disaster response robot. The experimental results show that the estimation errors of the two methods are small enough to achieve a fully automated system for detecting and rotating the valve with WAREC-1.
UR - http://www.scopus.com/inward/record.url?scp=85081167488&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081167488&partnerID=8YFLogxK
U2 - 10.1109/IROS40897.2019.8967586
DO - 10.1109/IROS40897.2019.8967586
M3 - Conference contribution
AN - SCOPUS:85081167488
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 4790
EP - 4797
BT - 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019
Y2 - 3 November 2019 through 8 November 2019
ER -