TY - GEN
T1 - Learning task space control through goal directed exploration
AU - Jamone, Lorenzo
AU - Natale, Lorenzo
AU - Hashimoto, Kenji
AU - Sandini, Giulio
AU - Takanishi, Atsuo
PY - 2011
Y1 - 2011
AB - We present an autonomous goal-directed strategy for learning to control a redundant robot in the task space. We discuss the advantages of exploring the state space through goal-directed actions defined in the task space (i.e. learning by trying to do) rather than performing motor babbling in the joint space, and we stress that learning must be performed online, without any separation between training and execution. Our solution relies on learning the forward model and then inverting it for control; different approaches to learning the forward model are described and compared. Experimental results on a simulated humanoid robot are provided to support our claims. The robot autonomously learns how to perform reaching actions directed toward 3D targets in task space by using arm and waist motion, without relying on any prior knowledge or initial motor babbling. To test the ability of the system to adapt to sudden changes both in the robot structure and in the perceived environment, we artificially introduce two different kinds of kinematic perturbations: a modification of the length of one link and a rotation of the task space reference frame. Results demonstrate that the online update of the model allows the robot to cope with such situations.
UR - http://www.scopus.com/inward/record.url?scp=84860720009&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84860720009&partnerID=8YFLogxK
U2 - 10.1109/ROBIO.2011.6181368
DO - 10.1109/ROBIO.2011.6181368
M3 - Conference contribution
AN - SCOPUS:84860720009
SN - 9781457721373
T3 - 2011 IEEE International Conference on Robotics and Biomimetics, ROBIO 2011
SP - 702
EP - 708
BT - 2011 IEEE International Conference on Robotics and Biomimetics, ROBIO 2011
T2 - 2011 IEEE International Conference on Robotics and Biomimetics, ROBIO 2011
Y2 - 7 December 2011 through 11 December 2011
ER -
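
Illustrative sketch (not part of the record above, and not the paper's algorithm): the abstract describes learning a forward model online and inverting it for task-space control while exploring through goal-directed reaching. The toy example below assumes a hypothetical 3-link planar arm, a Broyden-style online Jacobian estimate, and pseudoinverse control toward randomly drawn task-space goals; the link lengths, gains, and update rule are assumptions chosen only to make the scheme concrete, whereas the paper compares several forward-model learners on a simulated humanoid.

import numpy as np

LINKS = np.array([0.3, 0.25, 0.2])          # assumed link lengths of a toy 3-link planar arm [m]

def forward(q):
    # Simulated "robot": true forward kinematics returning the end-effector position (x, y).
    angles = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

rng = np.random.default_rng(0)
q = np.zeros(3)                              # joint configuration
J = rng.normal(scale=0.1, size=(2, 3))       # crude initial guess of the task-space Jacobian
x = forward(q)

for trial in range(50):                      # goal-directed exploration: draw reachable goals in task space
    ang, rad = rng.uniform(0.0, 2.0 * np.pi), rng.uniform(0.1, 0.7)
    goal = rad * np.array([np.cos(ang), np.sin(ang)])
    for _ in range(200):
        err = goal - x
        if np.linalg.norm(err) < 1e-3:
            break
        dq = np.clip(0.1 * np.linalg.pinv(J) @ err, -0.2, 0.2)   # invert the learned model for control
        q = q + dq
        x_new = forward(q)
        dx = x_new - x
        # Online update of the model from the motion just executed (Broyden rank-1 correction),
        # so there is no separation between training and execution.
        J = J + np.outer(dx - J @ dq, dq) / (dq @ dq + 1e-9)
        x = x_new
    print(f"goal {trial:2d}: residual error {np.linalg.norm(goal - x):.4f} m")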