TY - JOUR
T1 - Learning Bidirectional Translation Between Descriptions and Actions With Small Paired Data
AU - Toyoda, Minori
AU - Suzuki, Kanata
AU - Hayashi, Yoshihiko
AU - Ogata, Tetsuya
N1 - Funding Information:
This work was supported by JST [Moonshot R&D] under Grant JPMJMS2031
Publisher Copyright:
© 2022 IEEE.
PY - 2022/10/1
Y1 - 2022/10/1
N2 - This study achieved bidirectional translation between descriptions and actions using a small amount of paired data from different modalities. The ability to mutually generate descriptions and actions is essential for robots to collaborate with humans in their daily lives, which generally requires a large dataset containing comprehensive pairs of data from both modalities. However, such a paired dataset is expensive to construct and difficult to collect. To address this issue, this study proposes a two-stage training method for bidirectional translation. In the proposed method, we first train recurrent autoencoders (RAEs) for descriptions and actions with a large amount of non-paired data. Then, we fine-tune the entire model to bind their intermediate representations using small paired data. Because the data used for pre-training do not require pairing, behavior-only data or a large language corpus can be used. We experimentally evaluated our method using a paired dataset consisting of motion-captured actions and descriptions. The results showed that our method performed well even when the amount of paired training data was small. Visualization of the intermediate representations of each RAE showed that similar actions were encoded into clustered positions and that the corresponding feature vectors were well aligned.
KW - Embodied cognitive science
KW - learning from experience
KW - representation learning
UR - http://www.scopus.com/inward/record.url?scp=85135735485&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85135735485&partnerID=8YFLogxK
U2 - 10.1109/LRA.2022.3196159
DO - 10.1109/LRA.2022.3196159
M3 - Article
AN - SCOPUS:85135735485
SN - 2377-3766
VL - 7
SP - 10930
EP - 10937
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 4
ER -