Augmented Cross-modality: Translating the Physiological Responses, Knowledge and Impression to Audio-visual Information in Virtual Reality

Yutaro Hirao, Takashi Kawai

Research output: Article, peer-reviewed

1 Citation (Scopus)

Abstract

This paper proposes a method of interaction design for presenting a haptic experience as intended in virtual reality (VR). The method, which we name "Augmented Cross-Modality," translates the physiological responses, knowledge, and impressions associated with an experience in the real world into audio-visual stimuli and adds them to the interaction in VR. In this study, as expressions for presenting the haptic experience of gripping an object strongly and lifting a heavy object, we design hand tremor, strong gripping, and an increasing heart rate in VR. The objective is first to enhance the sense of bodily strain with these augmented cross-modal expressions, and then to change the quality of the total haptic experience, thereby making it closer to the experience of lifting a heavy object. The method is evaluated with several rating scales, interviews, and force sensors attached to a VR controller. The results suggest that the expressions of this method enhanced the haptic experience of strong gripping in almost all participants, confirming its effectiveness. © 2018 Society for Imaging Science and Technology.
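To make the described cross-modal translation concrete, the sketch below illustrates one way a normalized grip-force reading from a sensor on the VR controller could be mapped to visual hand tremor and a rising heartbeat cue. The paper does not specify its implementation; all function names, constants, and mapping curves here are illustrative assumptions, not the authors' values.

```python
import math
import random

# Assumed constants for the illustration (not from the paper).
BASE_HEART_RATE_BPM = 70.0      # resting heartbeat tempo for the audio cue
MAX_HEART_RATE_BPM = 130.0      # assumed ceiling under strong gripping
MAX_TREMOR_AMPLITUDE_M = 0.01   # assumed peak positional jitter of the virtual hand

def strain_level(grip_force_norm: float) -> float:
    """Map a normalized grip force (0..1) to a perceived-strain level (assumed curve)."""
    clamped = max(0.0, min(1.0, grip_force_norm))
    return clamped ** 1.5  # emphasize strong grips

def tremor_offset(grip_force_norm: float, t: float) -> tuple:
    """Per-frame positional jitter added to the rendered virtual hand."""
    amp = MAX_TREMOR_AMPLITUDE_M * strain_level(grip_force_norm)
    # Band-limited shaking approximated by summed sines with random modulation.
    x = amp * math.sin(2 * math.pi * 9.0 * t) * random.uniform(0.5, 1.0)
    y = amp * math.sin(2 * math.pi * 11.0 * t) * random.uniform(0.5, 1.0)
    return (x, y, 0.0)

def heart_rate_bpm(grip_force_norm: float) -> float:
    """Heartbeat-sound tempo that rises with gripping effort."""
    s = strain_level(grip_force_norm)
    return BASE_HEART_RATE_BPM + s * (MAX_HEART_RATE_BPM - BASE_HEART_RATE_BPM)

if __name__ == "__main__":
    # Example: print the cues for light, medium, and maximal grip force.
    for force in (0.2, 0.6, 1.0):
        print(force, round(heart_rate_bpm(force), 1), tremor_offset(force, t=0.5))
```

In this sketch the same grip-force signal drives both the visual (tremor) and auditory (heartbeat) channels, which mirrors the abstract's idea of layering several cross-modal expressions onto one interaction to strengthen the sense of strain.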

Original language: English
Article number: 060402
Journal: Journal of Imaging Science and Technology
Volume: 62
Issue: 6
DOI
Publication status: Published - Nov. 2018

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Atomic and Molecular Physics, and Optics
  • General Chemistry
  • Computer Science Applications
