TY - JOUR
T1 - Leveraging motor babbling for efficient robot learning
AU - Kase, Kei
AU - Matsumoto, Noboru
AU - Ogata, Tetsuya
N1 - Funding Information:
This work was conducted at the Artificial Intelligence Research Center of the National Institute of Advanced Industrial Science and Technology, and is based on results obtained from a project, JPNP20006.
Publisher Copyright:
© Fuji Technology Press Ltd.
PY - 2021/10
Y1 - 2021/10
N2 - Deep robotic learning by learning from demonstration allows robots to mimic a given demonstration and generalize their performance to unknown task setups. However, this generalization ability is heavily affected by the number of demonstrations, which can be costly to generate manually. Without sufficient demonstrations, robots tend to overfit to the available demonstrations and lose the robustness offered by deep learning. Applying the concept of motor babbling, a process similar to that by which human infants move their bodies randomly to obtain proprioception, is also effective for enhancing a robot's generalization ability. Furthermore, babbling data are simpler to generate than task-oriented demonstrations. Previous studies have used motor babbling for pre-training followed by fine-tuning, but the babbling data tend to be overwritten by the task data. In this work, we propose an RNN-based robot-control framework that leverages targetless babbling data to help the robot acquire proprioception and that increases the generalization ability of the learned task by training on babbling and task data simultaneously. Through simultaneous learning, our framework can use the dynamics obtained from the babbling data to learn the target task efficiently. In the experiments, we prepare demonstrations of a block-picking task and aimless babbling data. With our framework, the robot learns the task faster and shows greater generalization ability when blocks are at unknown positions or move during execution.
AB - Deep robotic learning by learning from demonstration allows robots to mimic a given demonstration and generalize their performance to unknown task setups. However, this generalization ability is heavily affected by the number of demonstrations, which can be costly to generate manually. Without sufficient demonstrations, robots tend to overfit to the available demonstrations and lose the robustness offered by deep learning. Applying the concept of motor babbling, a process similar to that by which human infants move their bodies randomly to obtain proprioception, is also effective for enhancing a robot's generalization ability. Furthermore, babbling data are simpler to generate than task-oriented demonstrations. Previous studies have used motor babbling for pre-training followed by fine-tuning, but the babbling data tend to be overwritten by the task data. In this work, we propose an RNN-based robot-control framework that leverages targetless babbling data to help the robot acquire proprioception and that increases the generalization ability of the learned task by training on babbling and task data simultaneously. Through simultaneous learning, our framework can use the dynamics obtained from the babbling data to learn the target task efficiently. In the experiments, we prepare demonstrations of a block-picking task and aimless babbling data. With our framework, the robot learns the task faster and shows greater generalization ability when blocks are at unknown positions or move during execution.
KW - Motor babbling
KW - Predictive learning
KW - Robot learning
UR - http://www.scopus.com/inward/record.url?scp=85118512852&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85118512852&partnerID=8YFLogxK
U2 - 10.20965/jrm.2021.p1063
DO - 10.20965/jrm.2021.p1063
M3 - Article
AN - SCOPUS:85118512852
SN - 0915-3942
VL - 33
SP - 1063
EP - 1074
JO - Journal of Robotics and Mechatronics
JF - Journal of Robotics and Mechatronics
IS - 5
ER -