TY - GEN
T1 - Self-organization of object features representing motion using Multiple Timescales Recurrent Neural Network
AU - Nishide, Shun
AU - Tani, Jun
AU - Okuno, Hiroshi G.
AU - Ogata, Tetsuya
PY - 2012
Y1 - 2012
N2 - Affordance theory suggests that humans recognize the environment based on invariants. Invariants are features that describe the environment, offering behavioral information to humans. Two types of invariants exist: structural invariants and transformational invariants. In our previous paper, we developed a method that self-organizes transformational invariants, or motion features, from camera images based on a robot's experiences. The model used a bi-directional technique combining a recurrent neural network for dynamics learning, namely the Recurrent Neural Network with Parametric Bias (RNNPB), and a hierarchical neural network for feature extraction. The bi-directional training method developed in the previous work was effective in clustering the motions of objects, but the analysis did not yield good segregation of the self-organized features (transformational invariants) among different motion types. In this paper, we present a refined model that integrates dynamics learning and feature extraction in a single model. The refined model is based on the Multiple Timescales Recurrent Neural Network (MTRNN), which possesses better learning capability than RNNPB. Self-organization results for four types of motions demonstrated the model's capability to create clusters of object motions. The analysis showed that the model extracted feature sequences with different characteristics for the four object motion types.
AB - Affordance theory suggests that humans recognize the environment based on invariants. Invariants are features that describe the environment, offering behavioral information to humans. Two types of invariants exist: structural invariants and transformational invariants. In our previous paper, we developed a method that self-organizes transformational invariants, or motion features, from camera images based on a robot's experiences. The model used a bi-directional technique combining a recurrent neural network for dynamics learning, namely the Recurrent Neural Network with Parametric Bias (RNNPB), and a hierarchical neural network for feature extraction. The bi-directional training method developed in the previous work was effective in clustering the motions of objects, but the analysis did not yield good segregation of the self-organized features (transformational invariants) among different motion types. In this paper, we present a refined model that integrates dynamics learning and feature extraction in a single model. The refined model is based on the Multiple Timescales Recurrent Neural Network (MTRNN), which possesses better learning capability than RNNPB. Self-organization results for four types of motions demonstrated the model's capability to create clusters of object motions. The analysis showed that the model extracted feature sequences with different characteristics for the four object motion types.
KW - Affordance Theory
KW - Feature Extraction
KW - Recurrent Neural Network
UR - http://www.scopus.com/inward/record.url?scp=84865089295&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84865089295&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2012.6252714
DO - 10.1109/IJCNN.2012.6252714
M3 - Conference contribution
AN - SCOPUS:84865089295
SN - 9781467314909
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2012 International Joint Conference on Neural Networks, IJCNN 2012
T2 - 2012 Annual International Joint Conference on Neural Networks, IJCNN 2012, Part of the 2012 IEEE World Congress on Computational Intelligence, WCCI 2012
Y2 - 10 June 2012 through 15 June 2012
ER -