Automated generation of non-verbal behavior for virtual embodied characters

Werner Breitfuss*, Helmut Prendinger, Mitsuru Ishizuka

*Corresponding author for this work

Research output: Conference contribution

17 Citations (Scopus)

Abstract

In this paper we introduce a system that automatically adds different types of non-verbal behavior to a given dialogue script between two virtual embodied agents. It allows us to transform a dialogue in text format into an agent behavior script enriched by eye gaze and conversational gesture behavior. The agents' gaze behavior is informed by theories of human face-to-face gaze behavior. Gestures are generated based on the analysis of linguistic and contextual information of the input text. The resulting annotated dialogue script is then transformed into the Multimodal Presentation Markup Language for 3D agents (MPML3D), which controls the multimodal behavior of animated life-like agents, including facial and body animation and synthetic speech. Using our system makes it very easy to add appropriate non-verbal behavior to a given dialogue text, a task that would otherwise be very cumbersome and time-consuming.
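To make the pipeline the abstract describes more concrete, here is a minimal Python sketch that annotates a two-agent dialogue with gaze and gesture tags and serializes it to MPML3D-style XML. It is not the authors' implementation: the element names (mpml3d, action, gaze, gesture, speak) and the keyword-based gesture heuristic are illustrative assumptions only, and the paper's actual gaze model, gesture analysis, and the real MPML3D schema are more elaborate.

    # Minimal sketch of the abstract's pipeline: plain dialogue turns are
    # annotated with gaze and gesture behavior and emitted as an
    # MPML3D-like XML script. All tag/attribute names are assumptions.
    import xml.etree.ElementTree as ET

    def annotate_dialogue(turns):
        # turns: list of (speaker, listener, utterance) tuples.
        script = ET.Element("mpml3d")
        for speaker, listener, text in turns:
            action = ET.SubElement(script, "action", agent=speaker)
            # Gaze rule of thumb from face-to-face conversation research:
            # the speaker directs gaze at the listener when taking the turn.
            ET.SubElement(action, "gaze", target=listener)
            # Toy stand-in for the paper's linguistic/contextual analysis:
            # deictic words trigger a pointing gesture, anything else a beat.
            words = text.lower().split()
            kind = "point" if any(w in ("this", "that", "here", "there")
                                  for w in words) else "beat"
            ET.SubElement(action, "gesture", type=kind)
            ET.SubElement(action, "speak").text = text
        return ET.tostring(script, encoding="unicode")

    print(annotate_dialogue([
        ("Ken", "Yuki", "Look at this new exhibit."),
        ("Yuki", "Ken", "It is quite impressive."),
    ]))

Running the sketch prints a single XML string in which each dialogue turn carries one gaze target and one gesture type, mirroring the kind of enriched behavior script the abstract describes.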

Original language: English
Host publication title: Proceedings of the 9th International Conference on Multimodal Interfaces, ICMI'07
Pages: 319-322
Number of pages: 4
DOI
Publication status: Published - 2007
Externally published: Yes
Event: 9th International Conference on Multimodal Interfaces, ICMI 2007 - Nagoya
Duration: Nov 12, 2007 - Nov 15, 2007

Other

Other: 9th International Conference on Multimodal Interfaces, ICMI 2007
City: Nagoya
Period: 07/11/12 - 07/11/15

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Graphics and Computer-Aided Design
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
