Augmented cross-modality: Translating the physiological responses, knowledge and impression to audio-visual information in virtual reality

Yutaro Hirao*, Takashi Kawai

*Corresponding author for this work

Research output: Conference article, peer-reviewed

4 citations (Scopus)

Abstract

This paper proposes a method of interaction design for presenting haptic experiences as intended in virtual reality (VR). The method, which we call "Augmented Cross-Modality", translates the physiological responses, knowledge, and impressions associated with a real-world experience into audio-visual stimuli and adds them to the interaction in VR. In this study, as expressions for presenting the haptic experience of gripping an object strongly and lifting a heavy object, we design hand tremor, strong gripping, and an increasing heart rate in VR. The objective is first to enhance the sense of bodily strain with these augmented cross-modal expressions, and then to change the quality of the overall haptic experience, bringing it closer to that of lifting a heavy object. The method is evaluated with several rating scales, interviews, and force sensors attached to a VR controller. The results suggest that these expressions enhanced the haptic experience of strong gripping in almost all participants, confirming the method's effectiveness.

Original language: English
Article number: 060402
Journal: IS and T International Symposium on Electronic Imaging Science and Technology
Volume: 2019
Issue: 2
DOI
Publication status: Published - 13 Jan 2019
Event: 2019 Conference on Engineering Reality of Virtual Reality, ERVR 2019 - Burlingame, United States
Duration: 13 Jan 2019 - 17 Jan 2019

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Computer Science Applications
  • Human-Computer Interaction
  • Software
  • Electrical and Electronic Engineering
  • Atomic and Molecular Physics, and Optics
