Abstract
A multimodal interface provides multiple modalities for input and output, such as speech, eye gaze and facial expression. With recent progress in multimodal interfaces, various approaches to multimodal input fusion and output generation have been proposed. However, less attention has been paid to how to integrate them in a single multimodal input and output system. This paper proposes an approach, termed THE HINGE, for providing agent-based multimodal presentations in accordance with multimodal input fusion results. Analysis of the experimental results shows that the proposed approach enhances the flexibility of the system while maintaining its stability.
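The abstract describes mapping multimodal input fusion results to agent-based multimodal output, but gives no implementation details. The sketch below is purely illustrative of that general idea: all type names, fields, and the planning logic are assumptions for the example, not the paper's actual data structures or algorithm.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structures: a fused input interpretation and an agent output act.
# These are illustrative assumptions, not THE HINGE's actual representation.

@dataclass
class FusedInput:
    """Result of multimodal input fusion (e.g. speech + gaze + gesture)."""
    intent: str                                          # e.g. "select_item"
    referents: List[str] = field(default_factory=list)   # objects resolved from gaze/gesture
    confidence: float = 1.0

@dataclass
class PresentationAct:
    """One output act for the embodied agent."""
    modality: str     # "speech", "gesture", or "facial_expression"
    content: str

def plan_presentation(fused: FusedInput) -> List[PresentationAct]:
    """Map a fused input interpretation to coordinated multimodal output acts."""
    acts: List[PresentationAct] = []
    if fused.confidence < 0.5:
        # Low-confidence fusion: have the agent ask for clarification instead of acting.
        acts.append(PresentationAct("speech", "Sorry, could you say that again?"))
        acts.append(PresentationAct("facial_expression", "puzzled"))
        return acts
    if fused.intent == "select_item" and fused.referents:
        item = fused.referents[0]
        acts.append(PresentationAct("speech", f"Here is {item}."))
        acts.append(PresentationAct("gesture", f"point_at({item})"))
    else:
        acts.append(PresentationAct("speech", "What would you like to do?"))
    return acts

if __name__ == "__main__":
    fused = FusedInput(intent="select_item", referents=["the red vase"], confidence=0.9)
    for act in plan_presentation(fused):
        print(act.modality, "->", act.content)
```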
Original language | English |
---|---|
Title of host publication | Conference on Human Factors in Computing Systems - Proceedings |
Pages | 3483-3488 |
Number of pages | 6 |
DOI | |
Publication status | Published - 2008 |
Published externally | Yes |
Event | 28th Annual CHI Conference on Human Factors in Computing Systems - Florence; Duration: 5 Apr 2008 → 10 Apr 2008 |
Other
Other | 28th Annual CHI Conference on Human Factors in Computing Systems |
---|---|
City | Florence |
Period | 08/4/5 → 08/4/10 |
ASJC Scopus subject areas
- Human-Computer Interaction
- Computer Graphics and Computer-Aided Design
- Software