TY - GEN
T1 - A multimodal human-machine interface enabling situation-adaptive control inputs for highly automated vehicles
AU - Manawadu, Udara E.
AU - Kamezaki, Mitsuhiro
AU - Ishikawa, Masaaki
AU - Kawano, Takahiro
AU - Sugano, Shigeki
N1 - Funding Information:
ACKNOWLEDGMENT This research was supported by MEXT Japan, JSPS KAKENHI Grant Number 16K06196, and by the Research Institute of Science and Engineering, Waseda University.
Publisher Copyright:
© 2017 IEEE.
PY - 2017/7/28
Y1 - 2017/7/28
N2 - Intelligent vehicles operating at different levels of automation require the driver to fully or partially conduct the dynamic driving task (DDT), and to conduct fallback performance of the DDT, during a trip. Such vehicles create the need for novel human-machine interfaces (HMIs) designed for high-level vehicle control tasks. Multimodal interfaces (MMIs) offer advantages over unimodal interfaces, such as improved recognition, faster interaction, and situation-adaptability. In this study, we developed and evaluated an MMI system with three input modalities: touchscreen, hand-gesture, and haptic, for inputting tactical-level control commands (e.g., lane changing, overtaking, and parking). We conducted driving experiments in a driving simulator to evaluate the effectiveness of the MMI system. The results show that the multimodal HMI significantly reduced driver workload, improved interaction efficiency, and minimized input errors compared with unimodal interfaces. Moreover, we discovered relationships between input types and modalities: location-based inputs suited the touchscreen interface, and time-critical inputs suited the haptic interface. The results demonstrate the functional advantages and effectiveness of the multimodal interface system over its unimodal components for conducting tactical-level driving tasks.
AB - Intelligent vehicles operating at different levels of automation require the driver to fully or partially conduct the dynamic driving task (DDT), and to conduct fallback performance of the DDT, during a trip. Such vehicles create the need for novel human-machine interfaces (HMIs) designed for high-level vehicle control tasks. Multimodal interfaces (MMIs) offer advantages over unimodal interfaces, such as improved recognition, faster interaction, and situation-adaptability. In this study, we developed and evaluated an MMI system with three input modalities: touchscreen, hand-gesture, and haptic, for inputting tactical-level control commands (e.g., lane changing, overtaking, and parking). We conducted driving experiments in a driving simulator to evaluate the effectiveness of the MMI system. The results show that the multimodal HMI significantly reduced driver workload, improved interaction efficiency, and minimized input errors compared with unimodal interfaces. Moreover, we discovered relationships between input types and modalities: location-based inputs suited the touchscreen interface, and time-critical inputs suited the haptic interface. The results demonstrate the functional advantages and effectiveness of the multimodal interface system over its unimodal components for conducting tactical-level driving tasks.
UR - http://www.scopus.com/inward/record.url?scp=85028079762&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85028079762&partnerID=8YFLogxK
U2 - 10.1109/IVS.2017.7995875
DO - 10.1109/IVS.2017.7995875
M3 - Conference contribution
AN - SCOPUS:85028079762
T3 - IEEE Intelligent Vehicles Symposium, Proceedings
SP - 1195
EP - 1200
BT - IV 2017 - 28th IEEE Intelligent Vehicles Symposium
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 28th IEEE Intelligent Vehicles Symposium, IV 2017
Y2 - 11 June 2017 through 14 June 2017
ER -