TY - GEN
T1 - Analysis of effective environmental-camera images using virtual environment for advanced unmanned construction
AU - Yang, Junjie
AU - Kamezaki, Mitsuhiro
AU - Iwata, Hiroyasu
AU - Sugano, Shigeki
PY - 2014
Y1 - 2014
N2 - Unmanned construction machines are used after disasters. Compared with manned construction, their time efficiency is lower because of incomplete visual information, communication delay, and a lack of tactile feedback. Visual information is the most fundamental input for planning and judgment; however, in current vision systems, even the posture and zoom of the cameras are not adjusted. To improve the operator's visibility, these parameters must be adjusted in accordance with the work situation. The purpose of this study is thus to analyze effective camera images through comparison experiments, as a fundamental study of advanced visual support. We first developed a virtual reality simulator to allow experimental conditions to be modified more easily. To effectively derive the required images, experiments with two different camera positions and systems (fixed cameras and manually controllable cameras) were then conducted. The results indicate that enlarged views showing the manipulator are needed during object grasping, and tracking images showing the movement direction of the manipulator are needed during large end-point movements. The results also confirm that operational accuracy increases and the blind-spot rate decreases with the manual system compared with the fixed system.
AB - Unmanned construction machines are used after disasters. Compared with manned construction, their time efficiency is lower because of incomplete visual information, communication delay, and a lack of tactile feedback. Visual information is the most fundamental input for planning and judgment; however, in current vision systems, even the posture and zoom of the cameras are not adjusted. To improve the operator's visibility, these parameters must be adjusted in accordance with the work situation. The purpose of this study is thus to analyze effective camera images through comparison experiments, as a fundamental study of advanced visual support. We first developed a virtual reality simulator to allow experimental conditions to be modified more easily. To effectively derive the required images, experiments with two different camera positions and systems (fixed cameras and manually controllable cameras) were then conducted. The results indicate that enlarged views showing the manipulator are needed during object grasping, and tracking images showing the movement direction of the manipulator are needed during large end-point movements. The results also confirm that operational accuracy increases and the blind-spot rate decreases with the manual system compared with the fixed system.
UR - http://www.scopus.com/inward/record.url?scp=84906658721&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84906658721&partnerID=8YFLogxK
U2 - 10.1109/AIM.2014.6878155
DO - 10.1109/AIM.2014.6878155
M3 - Conference contribution
AN - SCOPUS:84906658721
SN - 9781479957361
T3 - IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM
SP - 664
EP - 669
BT - AIM 2014 - IEEE/ASME International Conference on Advanced Intelligent Mechatronics
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2014 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM 2014
Y2 - 8 July 2014 through 11 July 2014
ER -