TY - GEN
T1 - Classification in Japanese Sign Language Based on Dynamic Facial Expressions
AU - Tatsumi, Yui
AU - Tanaka, Shoko
AU - Akamatsu, Shunsuke
AU - Shindo, Takahiro
AU - Watanabe, Hiroshi
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - Sign language is a visual language expressed through hand movements and non-manual markers. Non-manual markers include facial expressions and head movements. These expressions vary across nations, so analysis methods specialized for each sign language are necessary. However, research on Japanese Sign Language (JSL) recognition is limited due to a lack of datasets. The development of recognition models that consider both the manual and non-manual features of JSL is crucial for precise and smooth communication with deaf individuals. In JSL, sentence types such as affirmative statements and questions are distinguished by facial expressions. In this paper, we propose a JSL recognition method that focuses on facial expressions. Our method uses a neural network to analyze facial features and classify sentence types. Through experiments, we confirm our method's effectiveness, achieving a classification accuracy of 96.05%.
KW - facial expressions
KW - Japanese Sign Language
KW - pose estimation
KW - sign language
UR - http://www.scopus.com/inward/record.url?scp=85213384382&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85213384382&partnerID=8YFLogxK
U2 - 10.1109/GCCE62371.2024.10760997
DO - 10.1109/GCCE62371.2024.10760997
M3 - Conference contribution
AN - SCOPUS:85213384382
T3 - GCCE 2024 - 2024 IEEE 13th Global Conference on Consumer Electronics
SP - 986
EP - 987
BT - GCCE 2024 - 2024 IEEE 13th Global Conference on Consumer Electronics
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 13th IEEE Global Conference on Consumer Electronics, GCCE 2024
Y2 - 29 October 2024 through 1 November 2024
ER -