TY - GEN
T1 - Development of Human-Like Driving Decision Making Model based on Human Brain Mechanism
AU - Sakuma, Tsuyoshi
AU - Miura, Satoshi
AU - Miyashita, Tomoyuki
AU - Fujie, Masakatsu G.
AU - Sugano, Shigeki
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/4/25
Y1 - 2019/4/25
N2 - Recent driving assistance technologies such as Electronic Stability Control (ESC) and automatic braking systems relieve drivers of complicated driving tasks. On the other hand, there is concern that such systems reduce a driver's sense of pleasure when their behavior differs from the driver's intention. To avoid this problem, it is important to evaluate the driver's intention and decision-making process and to design the assistance system to fit them. In this research, we propose an unsupervised reinforcement-learning driver model based on human cognitive mechanisms and human brain architecture. Because the objective of this study is to analyze the process of driving decision making, we adopt a simple actor-critic model as the driver model. We set the learning parameters from the driver's decision-making characteristics, which are derived from the task-execution process of the human brain, and set the state space from the driver's sensory characteristics. This driver model predicts lane-change decision making adequately and shows high accuracy (ACC = 94%) in verification tests with real driving data. This result is similar to unpublished results of a deep-neural-network driver model that uses the same data as training data. From these results, we consider that the proposed reward function and learned state space represent the driver's decision-making characteristics.
AB - Recent driving assistance technologies such as Electronic Stability Control (ESC) and automatic braking systems relieve drivers of complicated driving tasks. On the other hand, there is concern that such systems reduce a driver's sense of pleasure when their behavior differs from the driver's intention. To avoid this problem, it is important to evaluate the driver's intention and decision-making process and to design the assistance system to fit them. In this research, we propose an unsupervised reinforcement-learning driver model based on human cognitive mechanisms and human brain architecture. Because the objective of this study is to analyze the process of driving decision making, we adopt a simple actor-critic model as the driver model. We set the learning parameters from the driver's decision-making characteristics, which are derived from the task-execution process of the human brain, and set the state space from the driver's sensory characteristics. This driver model predicts lane-change decision making adequately and shows high accuracy (ACC = 94%) in verification tests with real driving data. This result is similar to unpublished results of a deep-neural-network driver model that uses the same data as training data. From these results, we consider that the proposed reward function and learned state space represent the driver's decision-making characteristics.
UR - http://www.scopus.com/inward/record.url?scp=85065641994&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85065641994&partnerID=8YFLogxK
U2 - 10.1109/SII.2019.8700430
DO - 10.1109/SII.2019.8700430
M3 - Conference contribution
AN - SCOPUS:85065641994
T3 - Proceedings of the 2019 IEEE/SICE International Symposium on System Integration, SII 2019
SP - 770
EP - 775
BT - Proceedings of the 2019 IEEE/SICE International Symposium on System Integration, SII 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE/SICE International Symposium on System Integration, SII 2019
Y2 - 14 January 2019 through 16 January 2019
ER -
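
Note: The abstract names a simple actor-critic model as the driver model for lane-change decision making. Below is a minimal illustrative sketch of a tabular actor-critic update, assuming a discretized state space and a binary keep-lane/change-lane action set; the feature discretization, reward values, and learning rates are assumptions for illustration, not taken from the paper.

    # Illustrative sketch only: a minimal tabular actor-critic for a binary
    # lane-change decision. State binning and reward shape are assumptions.
    import numpy as np

    N_STATES = 10          # assumed: discretized gap-to-lead-vehicle bins
    ACTIONS = [0, 1]       # 0 = keep lane, 1 = change lane
    ALPHA_CRITIC = 0.1     # critic (state-value) learning rate
    ALPHA_ACTOR = 0.05     # actor (action-preference) learning rate
    GAMMA = 0.95           # discount factor

    V = np.zeros(N_STATES)                   # critic: state values
    H = np.zeros((N_STATES, len(ACTIONS)))   # actor: action preferences

    def policy(s):
        """Softmax (Gibbs) policy over the actor's action preferences."""
        e = np.exp(H[s] - H[s].max())
        return e / e.sum()

    def step(s, a, r, s_next):
        """One actor-critic update from a transition (s, a, r, s')."""
        td_error = r + GAMMA * V[s_next] - V[s]   # TD error from the critic
        V[s] += ALPHA_CRITIC * td_error           # critic update
        H[s, a] += ALPHA_ACTOR * td_error         # actor update
        return td_error

    # Example transition: small gap (state 2), driver changes lane (action 1),
    # and receives an assumed reward of +1 for a safe, intended manoeuvre.
    s, a, r, s_next = 2, 1, 1.0, 7
    a_sampled = np.random.choice(ACTIONS, p=policy(s))
    step(s, a, r, s_next)

In the paper's framing, the reward function would be derived from the driver's decision-making characteristics and the state space from the driver's sensory characteristics; the sketch above only shows the generic update rule such a model would rest on.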