Abstract
We propose a novel deep learning framework for bidirectional translation between robot actions and their linguistic descriptions. Our model consists of two recurrent autoencoders (RAEs). One RAE learns to encode action sequences as fixed-dimensional vectors in a way that allows the sequences to be reproduced from the vectors by its decoder. The other RAE learns to encode descriptions in a similar way. In the learning process, in addition to the reproduction losses, we introduce a loss function that draws the representations of an action and its corresponding description toward each other in the latent vector space. Through this shared representation, the trained model can produce a linguistic description given a robot action. Conversely, given a linguistic instruction and the current visual input, the model can generate an appropriate action. Visualization of the latent representations shows that, by being learned jointly with descriptions, the robot actions are embedded in the vector space in a semantically compositional way.
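The training objective described above combines two reproduction losses with a binding term that pulls the paired latent vectors together. The following is a minimal, hypothetical sketch of such a combined loss; the use of mean-squared error for all three terms and the weighting factor `alpha` are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def paired_rae_loss(act_seq, act_recon, desc_seq, desc_recon,
                    z_action, z_description, alpha=1.0):
    """Sketch of a combined loss for paired recurrent autoencoders.

    Assumptions (not from the paper): all three terms are MSE, and
    `alpha` weights the shared-representation (binding) term.
    """
    # Reproduction loss of the action RAE (decoder output vs. input sequence)
    loss_act = np.mean((act_seq - act_recon) ** 2)
    # Reproduction loss of the description RAE
    loss_desc = np.mean((desc_seq - desc_recon) ** 2)
    # Binding loss: pull the two latent vectors toward each other
    loss_bind = np.mean((z_action - z_description) ** 2)
    return loss_act + loss_desc + alpha * loss_bind
```

In training, minimizing the binding term drives the action encoder and the description encoder to place corresponding pairs at nearby points, which is what makes cross-modal translation through the shared space possible.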
Original language | English |
---|---|
Article number | 8403309 |
Pages (from-to) | 3441-3448 |
Number of pages | 8 |
Journal | IEEE Robotics and Automation Letters |
Volume | 3 |
Issue number | 4 |
DOI | |
Publication status | Published - Oct 2018 |
ASJC Scopus subject areas
- Control and Systems Engineering
- Biomedical Engineering
- Human-Computer Interaction
- Mechanical Engineering
- Computer Vision and Pattern Recognition
- Computer Science Applications
- Control and Optimization
- Artificial Intelligence