Decision-making remains a significant challenge on the path to fully autonomous driving. Using deep reinforcement learning (DRL) to solve autonomous driving decision-making problems is a recent trend. A common approach encodes surrounding vehicles in a grid to describe the state space and help the DRL network extract scene features. However, in human driving, surrounding vehicles at different positions contribute differently to decision-making. Meanwhile, the network struggles to fully extract useful features from a sparse state, which can lead to catastrophic actions. Therefore, this work proposes a spatial attention module that computes different weights to represent the different contributions to the decision-making result, and a channel attention module that fully extracts useful features from sparse state features. These two attention modules are integrated into a dueling double deep Q-network, named D3QN-DA, which serves as the high-level decision-maker of a hierarchical-control-structure-based decision-making system. To improve agent performance, an emergency safety checker is introduced into the system, and the agent is trained and tested in simulation with a reward function designed for safety and efficiency. The experimental results show that the proposed method increases the safety rate by 54% and the average exploration distance by 30%, making the decision-making process of autonomous driving safer and more intelligent.
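To make the two attention mechanisms concrete, the following is a minimal NumPy sketch, not the authors' implementation: the spatial module assigns a softmax weight to each cell of a grid-encoded state (so vehicles at different positions contribute differently), and the channel module gates feature channels with a squeeze-and-excitation-style weighting to emphasize informative channels in a sparse state. All shapes, layer forms, and names here are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(features, w):
    # features: (C, H, W) grid encoding of surrounding vehicles.
    # w: (C,) learned projection producing one score per grid cell.
    scores = np.tensordot(w, features, axes=1)            # (H, W)
    weights = softmax(scores.reshape(-1)).reshape(scores.shape)
    return features * weights                             # reweight each cell

def channel_attention(features, w1, w2):
    # Squeeze: global average pool per channel; excite: two-layer gate.
    squeezed = features.mean(axis=(1, 2))                 # (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)               # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))           # sigmoid in (0, 1)
    return features * gate[:, None, None]                 # scale each channel

# Toy forward pass on a random grid state.
rng = np.random.default_rng(0)
C, H, W = 4, 3, 3
feats = rng.standard_normal((C, H, W))
out = channel_attention(spatial_attention(feats, rng.standard_normal(C)),
                        rng.standard_normal((C, C)),
                        rng.standard_normal((C, C)))
print(out.shape)
```

The reweighted feature map would then feed the dueling double Q-network's value and advantage streams; the attention weights themselves are what let the agent prioritize vehicles at decision-relevant positions.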
ASJC Scopus subject areas
- Computer Science (General)