Visibility Enhancement using Autonomous Multicamera Controls with Situational Role Assignment for Teleoperated Work Machines

Mitsuhiro Kamezaki*, Junjie Yang, Hiroyasu Iwata, Shigeki Sugano

*Corresponding author for this work

Research output: Article (peer-reviewed)

25 Citations (Scopus)

Abstract

The aim of this study is to provide a machine operator with enhanced visibility and more adaptive visual information suited to the work situation, particularly for advanced unmanned construction. Toward that end, we propose a method for autonomously controlling multiple environmental cameras. Situations in which the yaw, pitch, and zoom of cameras should be controlled are analyzed. Additionally, we define imaging objects, including the machine, manipulators, and end points, as well as imaging modes, including tracking, zoom, posture, and trajectory modes. To control each camera simply and effectively, four practical camera roles with different combinations of the imaging objects and modes are defined: overview machine, enlarge end point, posture-manipulator, and trajectory-manipulator. A real-time role-assignment system is described that assigns the four camera roles to four of the six cameras suited to the work situation (e.g., reaching, grasping, transport, and releasing) on the basis of assignment-priority rules. To test this system, debris-removal tasks were performed in a virtual reality simulation to compare performance among fixed-camera, manually controlled camera, and autonomously controlled camera systems. The results showed that the autonomous system was the best of the three at reducing grasping misses and erroneous contacts while simultaneously improving subjective usability and time efficiency.
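The abstract does not spell out the assignment-priority rules themselves, but the role-assignment idea can be illustrated with a minimal sketch. The Python snippet below is an assumption-laden illustration, not the paper's method: the ROLE_PRIORITY tables, the camera_suitability scores, and all identifiers are hypothetical. It simply fills the four roles in priority order for the current work situation, giving each role to the still-unassigned camera that scores best for it.

```python
from enum import Enum, auto

class Situation(Enum):
    REACHING = auto()
    GRASPING = auto()
    TRANSPORT = auto()
    RELEASING = auto()

class Role(Enum):
    OVERVIEW_MACHINE = auto()        # frame the whole machine
    ENLARGE_END_POINT = auto()       # zoom in on the end point
    POSTURE_MANIPULATOR = auto()     # show manipulator posture
    TRAJECTORY_MANIPULATOR = auto()  # show the manipulator's motion path

# Hypothetical priority tables: for each work situation, the four roles
# ordered from most to least important (the paper's actual rules may differ).
ROLE_PRIORITY = {
    Situation.REACHING:  [Role.TRAJECTORY_MANIPULATOR, Role.OVERVIEW_MACHINE,
                          Role.POSTURE_MANIPULATOR, Role.ENLARGE_END_POINT],
    Situation.GRASPING:  [Role.ENLARGE_END_POINT, Role.POSTURE_MANIPULATOR,
                          Role.OVERVIEW_MACHINE, Role.TRAJECTORY_MANIPULATOR],
    Situation.TRANSPORT: [Role.OVERVIEW_MACHINE, Role.TRAJECTORY_MANIPULATOR,
                          Role.POSTURE_MANIPULATOR, Role.ENLARGE_END_POINT],
    Situation.RELEASING: [Role.ENLARGE_END_POINT, Role.OVERVIEW_MACHINE,
                          Role.POSTURE_MANIPULATOR, Role.TRAJECTORY_MANIPULATOR],
}

def assign_roles(situation, camera_suitability):
    """Assign the four roles to four of the six cameras.

    camera_suitability[camera_id][role] is a hypothetical score of how well
    a camera can serve a role (e.g., based on viewpoint and distance).
    Roles are filled in priority order; each camera serves at most one role.
    """
    assignment = {}
    free_cameras = set(camera_suitability)
    for role in ROLE_PRIORITY[situation]:
        best = max(free_cameras, key=lambda cam: camera_suitability[cam][role])
        assignment[role] = best
        free_cameras.remove(best)
    return assignment

if __name__ == "__main__":
    # Six cameras with made-up suitability scores in [0, 1) for each role.
    import random
    random.seed(0)
    cameras = {f"cam{i}": {r: random.random() for r in Role} for i in range(1, 7)}
    print(assign_roles(Situation.GRASPING, cameras))
```

In a system like the one described, such an assignment would presumably be recomputed online whenever the recognized work situation changes; how the two unassigned cameras behave is not specified in the abstract.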

Original language: English
Pages (from-to): 802-824
Number of pages: 23
Journal: Journal of Field Robotics
Volume: 33
Issue number: 6
DOI
Publication status: Published - Sep 1, 2016

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
