TY - GEN
T1 - Intersensory causality modeling using deep neural networks
AU - Noda, Kuniaki
AU - Arie, Hiroaki
AU - Suga, Yuki
AU - Ogata, Tetsuya
PY - 2013/12/1
Y1 - 2013/12/1
N2 - Our brain is known to enhance perceptual precision and reduce ambiguity about the sensory environment by integrating multiple sources of sensory information acquired from different modalities, such as vision, audition, and somatic sensation. From an engineering perspective, building a computational model that replicates this ability to integrate multimodal information and to self-organize the causal dependencies among the modalities represents one of the central challenges in robotics. In this study, we propose such a model based on a deep learning framework and evaluate it by conducting a bell ring task with a small humanoid robot. Our experimental results demonstrate that (1) the cross-modal memory retrieval function of the proposed method succeeds in generating the visual sequence from the corresponding sound and bell ring motion, and (2) the proposed method captures accurate causal dependencies among the sensory-motor sequences.
AB - Our brain is known to enhance perceptual precision and reduce ambiguity about the sensory environment by integrating multiple sources of sensory information acquired from different modalities, such as vision, audition, and somatic sensation. From an engineering perspective, building a computational model that replicates this ability to integrate multimodal information and to self-organize the causal dependencies among the modalities represents one of the central challenges in robotics. In this study, we propose such a model based on a deep learning framework and evaluate it by conducting a bell ring task with a small humanoid robot. Our experimental results demonstrate that (1) the cross-modal memory retrieval function of the proposed method succeeds in generating the visual sequence from the corresponding sound and bell ring motion, and (2) the proposed method captures accurate causal dependencies among the sensory-motor sequences.
KW - Deep learning
KW - Multimodal integration
KW - Robotics
KW - Temporal sequence learning
UR - http://www.scopus.com/inward/record.url?scp=84893603100&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84893603100&partnerID=8YFLogxK
U2 - 10.1109/SMC.2013.342
DO - 10.1109/SMC.2013.342
M3 - Conference contribution
AN - SCOPUS:84893603100
SN - 9780769551548
T3 - Proceedings - 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013
SP - 1995
EP - 2000
BT - Proceedings - 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013
T2 - 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013
Y2 - 13 October 2013 through 16 October 2013
ER -