TY - GEN
T1 - Driver's Drowsiness Classifier using a Single-Camera Robust to Mask-wearing Situations using an Eyelid, Lower-Face Contour, and Chest Movement Feature Vector GRU-based Model
AU - Lollett, Catherine
AU - Kamezaki, Mitsuhiro
AU - Sugano, Shigeki
N1 - Funding Information:
ACKNOWLEDGMENT The authors would like to thank the Driving Interface Team of Sugano’s Laboratory at Waseda University, all the subjects for their support, and the Research Institute for Science and Engineering of Waseda University.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Drowsy drivers cause many deadly crashes. As a result, researchers have focused on driver drowsiness classifiers that predict this condition in advance. However, these classifiers only consider constrained situations; under highly unconstrained scenarios, this classification remains extremely difficult. For example, several studies consider the driver's mouth closure crucial for detecting drowsiness, but the mouth cannot be seen when the driver wears a mask, which is a potential failure point for these classifiers. Moreover, these works do not conduct experiments under unconstrained situations such as environments with considerable light variation or drivers with eyeglass reflections. This paper therefore proposes a novel video-based pipeline that employs new parameters, computer vision, and deep-learning techniques to identify drowsiness in drivers under unconstrained situations. First, we alter the Lab color space of the frame to mitigate strong light changes. Then, we achieve robust recognition of the face, eye, and body-joint landmarks using dense landmark detection, which includes optical-flow estimation methods for 3D eyelid and facial-expression movement tracking and an online optimization framework that associates poses across frames. After this, we consider three important landmark groups: eyes, lower-face contour, and chest. We performed several pre-processing steps and combinations of these landmarks to compare the efficiency of three alternative feature vectors. Finally, we fuse spatiotemporal features using a Gated Recurrent Unit (GRU) model. Results on a dataset with highly unconstrained driving conditions demonstrate that our method outperforms alternatives, correctly classifying the driver's drowsiness in various challenging situations, all under mask-wearing scenarios.
AB - Drowsy drivers cause many deadly crashes. As a result, researchers have focused on driver drowsiness classifiers that predict this condition in advance. However, these classifiers only consider constrained situations; under highly unconstrained scenarios, this classification remains extremely difficult. For example, several studies consider the driver's mouth closure crucial for detecting drowsiness, but the mouth cannot be seen when the driver wears a mask, which is a potential failure point for these classifiers. Moreover, these works do not conduct experiments under unconstrained situations such as environments with considerable light variation or drivers with eyeglass reflections. This paper therefore proposes a novel video-based pipeline that employs new parameters, computer vision, and deep-learning techniques to identify drowsiness in drivers under unconstrained situations. First, we alter the Lab color space of the frame to mitigate strong light changes. Then, we achieve robust recognition of the face, eye, and body-joint landmarks using dense landmark detection, which includes optical-flow estimation methods for 3D eyelid and facial-expression movement tracking and an online optimization framework that associates poses across frames. After this, we consider three important landmark groups: eyes, lower-face contour, and chest. We performed several pre-processing steps and combinations of these landmarks to compare the efficiency of three alternative feature vectors. Finally, we fuse spatiotemporal features using a Gated Recurrent Unit (GRU) model. Results on a dataset with highly unconstrained driving conditions demonstrate that our method outperforms alternatives, correctly classifying the driver's drowsiness in various challenging situations, all under mask-wearing scenarios.
UR - http://www.scopus.com/inward/record.url?scp=85135369600&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85135369600&partnerID=8YFLogxK
U2 - 10.1109/IV51971.2022.9827229
DO - 10.1109/IV51971.2022.9827229
M3 - Conference contribution
AN - SCOPUS:85135369600
T3 - IEEE Intelligent Vehicles Symposium, Proceedings
SP - 519
EP - 526
BT - 2022 IEEE Intelligent Vehicles Symposium, IV 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE Intelligent Vehicles Symposium, IV 2022
Y2 - 5 June 2022 through 9 June 2022
ER -