TY - GEN
T1 - Dynamical linking of positive and negative sentences to goal-oriented robot behavior by hierarchical RNN
AU - Yamada, Tatsuro
AU - Murata, Shingo
AU - Arie, Hiroaki
AU - Ogata, Tetsuya
N1 - Funding Information:
This work was supported by a Grant-in-Aid for Scientific Research on Innovative Areas “Constructive Developmental Science” (24119003), CREST, JST, and a Grant-in-Aid for Young Scientists (B) (26870649).
Publisher Copyright:
© Springer International Publishing Switzerland 2016.
PY - 2016
AB - Meanings of language expressions are constructed not only from words grounded in real-world matters, but also from words such as “not” that participate in the construction by working as logical operators. This study proposes a connectionist method for learning and internally representing functions that deal with both of these word groups, and grounding sentences constructed from them in corresponding behaviors just by experiencing raw sequential data of an imposed task. In the experiment, a robot implemented with a recurrent neural network is required to ground imperative positive and negative sentences given as a sequence of words in corresponding goal-oriented behavior. Analysis of the internal representations reveals that the network fulfilled the requirement by extracting XOR problems implicitly included in the target sequences and solving them by learning to represent the logical operations in its nonlinear dynamics in a self-organizing manner.
KW - Human–robot interaction
KW - Logical operation
KW - Recurrent neural network
KW - Symbol grounding
UR - http://www.scopus.com/inward/record.url?scp=84987935120&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84987935120&partnerID=8YFLogxK
DO - 10.1007/978-3-319-44778-0_40
M3 - Conference contribution
AN - SCOPUS:84987935120
SN - 9783319447773
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 339
EP - 346
BT - Artificial Neural Networks and Machine Learning - 25th International Conference on Artificial Neural Networks, ICANN 2016, Proceedings
A2 - Villa, Alessandro E.P.
A2 - Masulli, Paolo
A2 - Rivero, Antonio Javier Pons
PB - Springer Verlag
T2 - 25th International Conference on Artificial Neural Networks, ICANN 2016
Y2 - 6 September 2016 through 9 September 2016
ER -