TY - GEN
T1 - Automatic Detection of Valves with Disaster Response Robot on Basis of Depth Camera Information
AU - Nishikawa, Keishi
AU - Ohya, Jun
AU - Matsuzawa, Takashi
AU - Takanishi, Atsuo
AU - Ogata, Hiroyuki
AU - Hashimoto, Kenji
N1 - Funding Information:
ACKNOWLEDGMENT The authors of this paper acknowledge the support of Asaki Imai, Shunnsuke Kimura, Toshiki Kurosawa, Kazuya Miyakawa and Kanaki Nakao of Waseda University.
Publisher Copyright:
© 2018 IEEE.
PY - 2019/1/16
Y1 - 2019/1/16
N2 - In recent years, there has been an increasing demand for disaster response robots designed to work in disaster sites such as nuclear power plants where accidents have occurred. One of the tasks the robots need to complete at these kinds of sites is turning a valve. In order to employ robots for this task at real sites, it is desirable that the robots can autonomously detect the valves to be manipulated. In this paper, we propose a method that allows a disaster response robot to detect a valve whose parameters, such as position, orientation and size, are unknown, based on information captured by a depth camera mounted on the robot. In our proposed algorithm, the target valve is first detected in an RGB image captured by the depth camera, and 3D point cloud data including the target is reconstructed by combining the detection result with the depth image. Second, the reconstructed point cloud data is processed to estimate the parameters describing the target. Experiments were conducted on a simulator, and the results showed that our method could accurately estimate the parameters, with a minimum error of 0.0230 m in position, 0.196 % in radius, and 0.00222 degrees in orientation.
AB - In recent years, there has been an increasing demand for disaster response robots designed to work in disaster sites such as nuclear power plants where accidents have occurred. One of the tasks the robots need to complete at these kinds of sites is turning a valve. In order to employ robots for this task at real sites, it is desirable that the robots can autonomously detect the valves to be manipulated. In this paper, we propose a method that allows a disaster response robot to detect a valve whose parameters, such as position, orientation and size, are unknown, based on information captured by a depth camera mounted on the robot. In our proposed algorithm, the target valve is first detected in an RGB image captured by the depth camera, and 3D point cloud data including the target is reconstructed by combining the detection result with the depth image. Second, the reconstructed point cloud data is processed to estimate the parameters describing the target. Experiments were conducted on a simulator, and the results showed that our method could accurately estimate the parameters, with a minimum error of 0.0230 m in position, 0.196 % in radius, and 0.00222 degrees in orientation.
KW - 3D point cloud data
KW - disaster response robot
KW - object detection
UR - http://www.scopus.com/inward/record.url?scp=85062241149&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85062241149&partnerID=8YFLogxK
U2 - 10.1109/DICTA.2018.8615796
DO - 10.1109/DICTA.2018.8615796
M3 - Conference contribution
AN - SCOPUS:85062241149
T3 - 2018 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2018
BT - 2018 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2018
A2 - Pickering, Mark
A2 - Zheng, Lihong
A2 - You, Shaodi
A2 - Rahman, Ashfaqur
A2 - Murshed, Manzur
A2 - Asikuzzaman, Md
A2 - Natu, Ambarish
A2 - Robles-Kelly, Antonio
A2 - Paul, Manoranjan
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2018
Y2 - 10 December 2018 through 13 December 2018
ER -