TY - JOUR
T1 - Learning-based algorithms with application to urban scene autonomous driving
AU - Zhang, Shuwei
AU - Wu, Yutian
AU - Wang, Yichen
AU - Dong, Yifei
AU - Ogai, Harutoshi
AU - Tateno, Shigeyuki
N1 - Funding Information:
This work was supported by JST SPRING, Grant number JPMJSP2128.
Publisher Copyright:
© 2022, International Society of Artificial Life and Robotics (ISAROB).
PY - 2023/2
Y1 - 2023/2
N2 - Urban roads are among the most challenging environments for autonomous driving, and the main bottleneck lies in perception and decision-making algorithms. In this work, we propose a new learning-based autonomous driving system comprising a novel Convolutional Neural Network (CNN)-based multi-sensor fusion object detector and a novel Deep Reinforcement Learning (DRL)-based decision planner. The multi-sensor fusion object detector integrates two advanced CNN-based object detectors to separately detect objects from camera images and LiDAR point clouds with high precision and processing speed. In addition, a stereo-vision-integrated Camera-LiDAR object fusion method is proposed to complementarily fuse the detections from the two sensors. Furthermore, a DRL-based decision planner is proposed that integrates DRL-based tactical long-term decision-making with spatiotemporal short-term trajectory planning in dynamic urban driving scenarios, accounting for efficiency, safety, and comfort. Finally, we train the algorithms and conduct joint testing in real-world scenarios. The experimental results show that the proposed system can meet the requirements of autonomous driving in urban scenes.
AB - Urban roads are among the most challenging environments for autonomous driving, and the main bottleneck lies in perception and decision-making algorithms. In this work, we propose a new learning-based autonomous driving system comprising a novel Convolutional Neural Network (CNN)-based multi-sensor fusion object detector and a novel Deep Reinforcement Learning (DRL)-based decision planner. The multi-sensor fusion object detector integrates two advanced CNN-based object detectors to separately detect objects from camera images and LiDAR point clouds with high precision and processing speed. In addition, a stereo-vision-integrated Camera-LiDAR object fusion method is proposed to complementarily fuse the detections from the two sensors. Furthermore, a DRL-based decision planner is proposed that integrates DRL-based tactical long-term decision-making with spatiotemporal short-term trajectory planning in dynamic urban driving scenarios, accounting for efficiency, safety, and comfort. Finally, we train the algorithms and conduct joint testing in real-world scenarios. The experimental results show that the proposed system can meet the requirements of autonomous driving in urban scenes.
KW - Autonomous driving
KW - Decision-making and planning
KW - Deep learning
KW - Object detection
UR - http://www.scopus.com/inward/record.url?scp=85139942916&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85139942916&partnerID=8YFLogxK
U2 - 10.1007/s10015-022-00813-3
DO - 10.1007/s10015-022-00813-3
M3 - Article
AN - SCOPUS:85139942916
SN - 1433-5298
VL - 28
SP - 244
EP - 252
JO - Artificial Life and Robotics
JF - Artificial Life and Robotics
IS - 1
ER -