TY - JOUR
T1 - Adaptive Drawing Behavior by Visuomotor Learning Using Recurrent Neural Networks
AU - Sasaki, Kazuma
AU - Ogata, Tetsuya
N1 - Funding Information:
Manuscript received December 8, 2017; revised June 3, 2018 and August 13, 2018; accepted August 20, 2018. Date of publication September 3, 2018; date of current version March 11, 2019. This work was supported in part by JST CREST under Grant JPMJCR15E3, in part by MEXT Grant-in-Aid for Scientific Research under Grant 15H01710, and in part by the Program for Leading Graduate Schools, “Graduate Program for Embodiment Informatics” of the Ministry of Education, Culture, Sports, Science, and Technology. (Corresponding author: Kazuma Sasaki.) The authors are with the Graduate School of Fundamental Science and Engineering, Waseda University, Shinjuku 169-8050, Japan (e-mail: ssk.sasaki@suou.waseda.jp).
Publisher Copyright:
© 2016 IEEE.
PY - 2019/3
Y1 - 2019/3
N2 - Drawing is a medium that represents an idea as drawn lines, and drawing behavior requires complex cognitive abilities to process visual and motor information. One way to understand aspects of these abilities is to construct computational models that can replicate them, rather than explaining the phenomena by building plausible models in a top-down manner. In this paper, we propose a supervised learning model that can be trained using examples of visuomotor sequences from drawings made by humans. Additionally, we demonstrate that the proposed model can: 1) associate motions to depict a given picture image and 2) adapt its drawing behavior to complete a given part of the drawing process. This dynamical model is implemented with recurrent neural networks that take images and motions as their input and output. Through experiments that involved learning human drawing sequences, the model was able to associate appropriate motions to achieve depiction targets while adapting to a given part of the drawing process. Furthermore, we demonstrate that including visual information in the model improved robustness against noisy lines in the input data.
AB - Drawing is a medium that represents an idea as drawn lines, and drawing behavior requires complex cognitive abilities to process visual and motor information. One way to understand aspects of these abilities is to construct computational models that can replicate them, rather than explaining the phenomena by building plausible models in a top-down manner. In this paper, we propose a supervised learning model that can be trained using examples of visuomotor sequences from drawings made by humans. Additionally, we demonstrate that the proposed model can: 1) associate motions to depict a given picture image and 2) adapt its drawing behavior to complete a given part of the drawing process. This dynamical model is implemented with recurrent neural networks that take images and motions as their input and output. Through experiments that involved learning human drawing sequences, the model was able to associate appropriate motions to achieve depiction targets while adapting to a given part of the drawing process. Furthermore, we demonstrate that including visual information in the model improved robustness against noisy lines in the input data.
KW - Adaptation
KW - drawing ability
KW - recurrent neural networks
KW - visuomotor learning
UR - http://www.scopus.com/inward/record.url?scp=85052840258&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85052840258&partnerID=8YFLogxK
U2 - 10.1109/TCDS.2018.2868160
DO - 10.1109/TCDS.2018.2868160
M3 - Article
AN - SCOPUS:85052840258
SN - 2379-8920
VL - 11
SP - 119
EP - 128
JO - IEEE Transactions on Cognitive and Developmental Systems
JF - IEEE Transactions on Cognitive and Developmental Systems
IS - 1
M1 - 8453841
ER -