TY - GEN
T1 - End-to-End Mobile Robot Navigation using a Residual Deep Reinforcement Learning in Dynamic Human Environments
AU - Ahmed, Abdullah
AU - Mohammad, Yasser F.O.
AU - Parque, Victor
AU - El-Hussieny, Haitham
AU - Ahmed, Sabah
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Safe navigation through human crowds is key to enabling ubiquitous practical mobility. Deep Reinforcement Learning (DRL) and End-to-End (E2E) approaches to goal-oriented robot navigation have the potential to yield policies that tackle localization, path planning, obstacle avoidance, and adaptation to change in unison. In this paper, we report an architecture based on convolutional units and residual blocks that enhances adaptability to unseen and dynamic human environments. In particular, our scheme outperformed the state-of-the-art baselines SOADRL and NAVREP by about 13% and 18% in average success rate, respectively, across 27 unseen and dynamic navigation instances. Furthermore, unlike standard models, our approach avoids explicitly encoding the positions and trajectories of moving humans. Our results show the potential to yield adaptive and generalizable policies for unknown and dynamic human environments.
AB - Safe navigation through human crowds is key to enabling ubiquitous practical mobility. Deep Reinforcement Learning (DRL) and End-to-End (E2E) approaches to goal-oriented robot navigation have the potential to yield policies that tackle localization, path planning, obstacle avoidance, and adaptation to change in unison. In this paper, we report an architecture based on convolutional units and residual blocks that enhances adaptability to unseen and dynamic human environments. In particular, our scheme outperformed the state-of-the-art baselines SOADRL and NAVREP by about 13% and 18% in average success rate, respectively, across 27 unseen and dynamic navigation instances. Furthermore, unlike standard models, our approach avoids explicitly encoding the positions and trajectories of moving humans. Our results show the potential to yield adaptive and generalizable policies for unknown and dynamic human environments.
KW - Autonomous Navigation
KW - Convolutional Neural Networks
KW - Deep Reinforcement Learning
KW - Dynamic Environments
KW - End-to-End Learning
KW - Mobile Robots
UR - http://www.scopus.com/inward/record.url?scp=85146869482&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85146869482&partnerID=8YFLogxK
U2 - 10.1109/MESA55290.2022.10004394
DO - 10.1109/MESA55290.2022.10004394
M3 - Conference contribution
AN - SCOPUS:85146869482
T3 - MESA 2022 - 18th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications, Proceedings
BT - MESA 2022 - 18th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 18th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications, MESA 2022
Y2 - 28 November 2022 through 30 November 2022
ER -