Abstract
In this paper we introduce a system that automatically adds different types of non-verbal behavior to a given dialogue script between two virtual embodied agents. It allows us to transform a dialogue in text format into an agent behavior script enriched with eye gaze and conversational gestures. The agents' gaze behavior is informed by theories of human face-to-face gaze behavior. Gestures are generated based on an analysis of linguistic and contextual information in the input text. The resulting annotated dialogue script is then transformed into the Multimodal Presentation Markup Language for 3D agents (MPML3D), which controls the multimodal behavior of animated life-like agents, including facial and body animation and synthetic speech. Our system makes it very easy to add appropriate non-verbal behavior to a given dialogue text, a task that would otherwise be cumbersome and time-consuming.
Original language | English |
---|---|
Title of host publication | Proceedings of the 9th International Conference on Multimodal Interfaces, ICMI'07 |
Pages | 319-322 |
Number of pages | 4 |
DOI | |
Publication status | Published - 2007 |
Externally published | Yes |
Event | 9th International Conference on Multimodal Interfaces, ICMI 2007 - Nagoya. Duration: 12 Nov 2007 → 15 Nov 2007 |
ASJC Scopus subject areas
- Artificial Intelligence
- Computer Graphics and Computer-Aided Design
- Computer Vision and Pattern Recognition
- Human-Computer Interaction