Classification in Japanese Sign Language Based on Dynamic Facial Expressions

Yui Tatsumi*, Shoko Tanaka, Shunsuke Akamatsu, Takahiro Shindo, Hiroshi Watanabe

*Corresponding author for this work

Research output: Conference contribution

Abstract

Sign language is a visual language expressed through hand movements and non-manual markers. Non-manual markers include facial expressions and head movements. These markers vary across nations, so specialized analysis methods are necessary for each sign language. However, research on Japanese Sign Language (JSL) recognition is limited due to a lack of datasets. The development of recognition models that consider both manual and non-manual features of JSL is crucial for precise and smooth communication with deaf individuals. In JSL, sentence types such as affirmative statements and questions are distinguished by facial expressions. In this paper, we propose a JSL recognition method that focuses on facial expressions. Our proposed method utilizes a neural network to analyze facial features and classify sentence types. Through experiments, we confirm our method's effectiveness by achieving a classification accuracy of 96.05%.
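The paper's model is not included in this record, but the pipeline the abstract describes — extract per-frame facial features, pool them over time, and classify the sentence type (e.g. affirmative vs. question) — can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the synthetic "raised-eyebrow" feature, the mean-pooling step, and the logistic-regression classifier stand in for the authors' facial-feature extraction and neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sequence(label, frames=30, dims=4):
    """Synthetic stand-in for per-frame facial features (e.g. landmark-derived
    eyebrow height, head tilt). Questions in JSL are often marked by raised
    eyebrows, so class 1 gets a higher mean in feature 0."""
    seq = rng.normal(0.0, 0.3, size=(frames, dims))
    if label == 1:  # "question" class: raised-eyebrow signal
        seq[:, 0] += 1.0
    return seq

# Build a toy dataset: mean-pool each sequence over its frames.
X, y = [], []
for label in (0, 1):            # 0 = affirmative, 1 = question
    for _ in range(50):
        X.append(make_sequence(label).mean(axis=0))
        y.append(label)
X, y = np.array(X), np.array(y)

# Tiny logistic-regression classifier trained by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
    grad = p - y                            # gradient of cross-entropy loss
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In the actual system a neural network over dynamic (per-frame) features replaces the mean-pool-plus-linear-model shortcut used here; the sketch only shows why temporally aggregated facial cues can separate the two sentence types.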

Original language: English
Title of host publication: GCCE 2024 - 2024 IEEE 13th Global Conference on Consumer Electronics
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 986-987
Number of pages: 2
ISBN (Electronic): 9798350355079
DOI
Publication status: Published - 2024
Event: 13th IEEE Global Conference on Consumer Electronics, GCCE 2024 - Kitakyushu, Japan
Duration: 29 Oct 2024 - 1 Nov 2024

Publication series

Name: GCCE 2024 - 2024 IEEE 13th Global Conference on Consumer Electronics

Conference

Conference: 13th IEEE Global Conference on Consumer Electronics, GCCE 2024
Country/Territory: Japan
City: Kitakyushu
Period: 24/10/29 - 24/11/1

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
  • Signal Processing
  • Electrical and Electronic Engineering
  • Media Technology
  • Instrumentation
