Agent-typed multimodal interface using speech, pointing gestures and CG

Haru Ando*, Hideaki Kikuchi, Nobuo Hataoka

*Corresponding author for this work

Research output: Article, peer-reviewed

5 Citations (Scopus)

Abstract

This paper proposes a sophisticated agent-typed user interface that uses speech, pointing gestures, and CG technologies. An "Agent-typed Interior Design System" has been implemented as a prototype for evaluating the proposed agent-typed interface: it accepts speech and pointing gestures as input modalities, and its agent is realized with three-dimensional CG (3-D CG) and speech guidance. This paper describes the details of the system implementation and presents evaluation results that clarify the effectiveness of the agent-typed interface.
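The abstract does not spell out how the two input modalities are combined. Purely as an illustration of the general technique such interfaces rely on, a deictic reference in a spoken command ("move that") can be grounded by pairing the utterance with the pointing gesture closest to it in time. The sketch below is a minimal, assumed implementation; all names (`PointingEvent`, `resolve_deictic`, the one-second window) are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class PointingEvent:
    t: float      # timestamp of the gesture, in seconds
    target: str   # id of the object under the pointer at that moment

def resolve_deictic(command: str, t_speech: float,
                    events: list[PointingEvent],
                    window: float = 1.0) -> str:
    """Ground the deictic word 'that' in a spoken command by substituting
    the object pointed at nearest in time to the utterance."""
    # Keep only gestures that fall inside the temporal window around the speech.
    candidates = [e for e in events if abs(e.t - t_speech) <= window]
    if not candidates:
        raise ValueError("no pointing gesture near the utterance")
    nearest = min(candidates, key=lambda e: abs(e.t - t_speech))
    return command.replace("that", nearest.target)

# Example: speech at t=2.0 s, with two recorded pointing events.
gestures = [PointingEvent(1.9, "sofa"), PointingEvent(5.0, "table")]
print(resolve_deictic("move that", 2.0, gestures))  # → move sofa
```

Real systems of this kind typically add speech recognition confidence scores and richer dialogue state, but the time-window pairing above captures the core fusion idea.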

Original language: English
Pages (from-to): 29-34
Number of pages: 6
Journal: Advances in Human Factors/Ergonomics
Volume: 20
Issue: C
DOI
Publication status: Published - 1995
Externally published: Yes

ASJC Scopus subject areas

  • Human Factors and Ergonomics
  • Social Sciences (miscellaneous)
