TY - JOUR
T1 - Visibility Enhancement using Autonomous Multicamera Controls with Situational Role Assignment for Teleoperated Work Machines
AU - Kamezaki, Mitsuhiro
AU - Yang, Junjie
AU - Iwata, Hiroyasu
AU - Sugano, Shigeki
N1 - Publisher Copyright:
© 2015 Wiley Periodicals, Inc.
PY - 2016/9/1
Y1 - 2016/9/1
N2 - The aim of this study is to provide a machine operator with enhanced visibility and more adaptive visual information suited to the work situation, particularly for advanced unmanned construction. Toward that end, we propose a method for autonomously controlling multiple environmental cameras. Situations in which the yaw, pitch, and zoom of cameras should be controlled are analyzed. Additionally, we define imaging objects, including the machine, manipulators, and end points, and imaging modes, including tracking, zoom, posture, and trajectory modes. To control each camera simply and effectively, four practical camera roles with different combinations of the imaging objects and modes are defined: overview machine, enlarge end point, posture-manipulator, and trajectory-manipulator. A real-time role assignment system is described for assigning the four camera roles to four out of six cameras suited to the work situation (e.g., reaching, grasping, transport, and releasing) on the basis of assignment-priority rules. To test this system, debris-removal tasks were performed in a virtual reality simulation to compare performance among fixed-camera, manual-control-camera, and autonomous-control-camera systems. The results showed that the autonomous system was the best of the three at decreasing the number of grasping misses and erroneous contacts while simultaneously increasing subjective usability and time efficiency.
UR - http://www.scopus.com/inward/record.url?scp=84927546177&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84927546177&partnerID=8YFLogxK
U2 - 10.1002/rob.21580
DO - 10.1002/rob.21580
M3 - Article
AN - SCOPUS:84927546177
SN - 1556-4959
VL - 33
SP - 802
EP - 824
JO - Journal of Field Robotics
JF - Journal of Field Robotics
IS - 6
ER -