For robot manipulation, reinforcement learning has provided an effective end-to-end approach to controlling complex dynamical systems. Model-free reinforcement learning methods ignore the model of the system dynamics and are limited to simple behavior control. By contrast, model-based methods can quickly reach optimal trajectory planning by building a model of the dynamical system. However, it is not easy to build an accurate and efficient system model that generalizes well, especially when facing a complex dynamical system and varied manipulation tasks. Furthermore, when the rewards provided by the environment are sparse, the agent loses effective guidance and fails to optimize its policy efficiently, which considerably decreases sample efficiency. In this paper, a model-based deep reinforcement learning algorithm, in which a deep neural network is used to simulate the system dynamics, is designed for robot manipulation. The proposed deep neural network model is robust enough to handle complex control tasks and possesses generalization ability. Moreover, a curiosity-based experience replay method is incorporated to address the sparse-reward problem and improve sample efficiency in reinforcement learning. The agent, which manipulates a robotic hand, is encouraged to explore optimal trajectories based on its failure experience. Simulation results demonstrate the effectiveness of the proposed method: various manipulation tasks are achieved successfully in such a complex dynamical system, and sample efficiency improves even in a sparse-reward environment, with learning time reduced considerably.
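The abstract does not specify how the curiosity signal or the replay mechanism is implemented. As a minimal illustrative sketch only, one common realization of curiosity-based replay uses the prediction error of a learned dynamics model as an intrinsic score and replays surprising (e.g. failed) transitions more often; the class and names below (`CuriosityReplay`, the linear weight matrix `W` standing in for the paper's deep network) are assumptions, not the authors' method:

```python
import numpy as np

class CuriosityReplay:
    """Hypothetical sketch: curiosity-weighted replay from dynamics-model error."""

    def __init__(self, state_dim, action_dim, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # Linear stand-in for a deep dynamics model: s' ~= [s, a] @ W
        self.W = rng.normal(scale=0.1, size=(state_dim + action_dim, state_dim))
        self.lr = lr
        self.buffer = []       # stored (s, a, s_next) transitions
        self.priorities = []   # curiosity score per transition

    def predict(self, s, a):
        return np.concatenate([s, a]) @ self.W

    def curiosity(self, s, a, s_next):
        # Intrinsic score: squared prediction error of the dynamics model.
        return float(np.sum((self.predict(s, a) - s_next) ** 2))

    def add(self, s, a, s_next):
        c = self.curiosity(s, a, s_next)
        self.buffer.append((s, a, s_next))
        self.priorities.append(c)
        return c

    def sample(self, rng):
        # Poorly predicted transitions (surprising failures) are replayed
        # more often, guiding exploration under sparse rewards.
        p = np.asarray(self.priorities)
        idx = rng.choice(len(self.buffer), p=p / p.sum())
        return self.buffer[idx]

    def train_dynamics(self, s, a, s_next):
        # One gradient step on the squared prediction error.
        x = np.concatenate([s, a])
        err = x @ self.W - s_next
        self.W -= self.lr * np.outer(x, err)
```

As the dynamics model trains on a transition, its curiosity score drops, so the replay distribution naturally shifts toward transitions the agent still predicts badly.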
|Journal||International Journal of Intelligent Robotics and Applications|
|Publication status||Published - 2020 Jun 1|
ASJC Scopus subject areas
- Computer Science Applications