Embodying Pre-Trained Word Embeddings through Robot Actions

Minori Toyoda, Kanata Suzuki, Hiroki Mori, Yoshihiko Hayashi, Tetsuya Ogata

Research output: Article › peer-review

5 Citations (Scopus)

Abstract

We propose a promising neural network model with which to acquire a grounded representation of robot actions and the linguistic descriptions thereof. Properly responding to various linguistic expressions, including polysemous words, is an important ability for robots that interact with people via linguistic dialogue. Previous studies have shown that robots can use words that are not included in the action-description paired datasets by using pre-trained word embeddings. However, the word embeddings trained under the distributional hypothesis are not grounded, as they are derived purely from a text corpus. In this letter, we transform the pre-trained word embeddings to embodied ones by using the robot's sensory-motor experiences. We extend a bidirectional translation model for actions and descriptions by incorporating non-linear layers that retrofit the word embeddings. By training the retrofit layer and the bidirectional translation model alternately, our proposed model is able to transform the pre-trained word embeddings to adapt to a paired action-description dataset. Our results demonstrate that the embeddings of synonyms form a semantic cluster by reflecting the experiences (actions and environments) of a robot. These embeddings allow the robot to properly generate actions from unseen words that are not paired with actions in a dataset.
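
The abstract describes the method only at a high level. Below is a minimal, hypothetical sketch (not the authors' implementation) of the alternating-training idea it outlines: a small non-linear "retrofit" layer maps pre-trained word embeddings to embodied ones, while a toy bidirectional action-description translation model is trained on paired data. All module names, dimensions, and loss terms are illustrative assumptions.

```python
# Hypothetical sketch of alternating training between a retrofit layer and a
# bidirectional action<->description translation model. Sizes and losses are assumed.
import torch
import torch.nn as nn

EMB_DIM, ACT_DIM, HID_DIM = 300, 10, 128  # assumed embedding / action / hidden sizes

class RetrofitLayer(nn.Module):
    """Non-linear transform from pre-trained to embodied word embeddings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMB_DIM, HID_DIM), nn.Tanh(), nn.Linear(HID_DIM, EMB_DIM)
        )
    def forward(self, words):            # words: (batch, seq_len, EMB_DIM)
        return self.net(words)

class BidirectionalTranslator(nn.Module):
    """Toy stand-in for the action<->description translation model."""
    def __init__(self):
        super().__init__()
        self.desc_enc = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)
        self.act_enc = nn.GRU(ACT_DIM, HID_DIM, batch_first=True)
        self.desc_to_act = nn.Linear(HID_DIM, ACT_DIM)   # description -> action
        self.act_to_emb = nn.Linear(HID_DIM, EMB_DIM)    # action -> description embedding
    def forward(self, desc_emb, actions):
        _, h_desc = self.desc_enc(desc_emb)
        _, h_act = self.act_enc(actions)
        return self.desc_to_act(h_desc[-1]), self.act_to_emb(h_act[-1])

retrofit, translator = RetrofitLayer(), BidirectionalTranslator()
opt_r = torch.optim.Adam(retrofit.parameters(), lr=1e-3)
opt_t = torch.optim.Adam(translator.parameters(), lr=1e-3)
mse = nn.MSELoss()

def loss_fn(desc_words, actions):
    emb = retrofit(desc_words)
    pred_act, pred_emb = translator(emb, actions)
    # Illustrative targets: final action frame and mean embodied word embedding.
    return mse(pred_act, actions[:, -1]) + mse(pred_emb, emb.mean(dim=1))

# Dummy paired batch: 4 descriptions (5 words each) and 4 action sequences (20 steps).
desc_words = torch.randn(4, 5, EMB_DIM)
actions = torch.randn(4, 20, ACT_DIM)

for step in range(10):
    # Alternate: update the translator, then the retrofit layer.
    opt_t.zero_grad(); loss_fn(desc_words, actions).backward(); opt_t.step()
    opt_r.zero_grad(); loss_fn(desc_words, actions).backward(); opt_r.step()
```

In this sketch the two optimizers are stepped alternately, so the retrofit layer adapts the pre-trained embeddings to the paired action-description data while the translator is held fixed within each half-step; the published model, encoders, and losses differ in their details.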

Original language: English
Article number: 9384172
Pages (from-to): 4225-4232
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 6
Issue number: 2
DOI
Publication status: Published - Apr 2021

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence
