TY - GEN
T1 - Normalized Facial Features-Based DNN for a Driver's Gaze Zone Classifier Using a Single Camera Robust to Various Highly Challenging Driving Scenarios
AU - Lollett, Catherine
AU - Kamezaki, Mitsuhiro
AU - Sugano, Shigeki
N1 - Funding Information:
The authors would like to thank the Driving Interface Team of Sugano's Laboratory at Waseda University, all the subjects for their support, and the Research Institute for Science and Engineering of Waseda University.
Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Driver inattention is a significant contributor to fatal car crashes, leading to a need for accurate driver gaze zone classification methods. However, these classifications are particularly challenging under unconstrained conditions, such as when a driver's face is partially occluded by masks or scarves, when the environment has significant lighting differences, or when the driver's eyeglasses have reflections. This paper presents a framework that addresses these challenges by combining computer vision techniques and different deep-learning models to robustly recognize a driver's gaze zone under highly unconstrained conditions. The framework uses Contrast-Limited Adaptive Histogram Equalization (CLAHE) to adjust the color space of the frame, making it easier to recognize features under varying light conditions. It then employs dense landmark detection techniques to achieve robust recognition of the face, eyes, and pupils, including the use of optical flow estimation methods for tracking pupil and eyelid movement. The framework considers two facial poses and trains individual Deep Neural Network (DNN) models for each pose. As facial structure varies among individuals, the feature vector parameters for these DNN models are based on different relations between pupil and eye landmarks proportional to the driver's face. The method has demonstrated its outstanding performance on a dataset involving highly unconstrained driving conditions.
AB - Driver inattention is a significant contributor to fatal car crashes, leading to a need for accurate driver gaze zone classification methods. However, these classifications are particularly challenging under unconstrained conditions, such as when a driver's face is partially occluded by masks or scarves, when the environment has significant lighting differences, or when the driver's eyeglasses have reflections. This paper presents a framework that addresses these challenges by combining computer vision techniques and different deep-learning models to robustly recognize a driver's gaze zone under highly unconstrained conditions. The framework uses Contrast-Limited Adaptive Histogram Equalization (CLAHE) to adjust the color space of the frame, making it easier to recognize features under varying light conditions. It then employs dense landmark detection techniques to achieve robust recognition of the face, eyes, and pupils, including the use of optical flow estimation methods for tracking pupil and eyelid movement. The framework considers two facial poses and trains individual Deep Neural Network (DNN) models for each pose. As facial structure varies among individuals, the feature vector parameters for these DNN models are based on different relations between pupil and eye landmarks proportional to the driver's face. The method has demonstrated its outstanding performance on a dataset involving highly unconstrained driving conditions.
KW - Advanced Driver Assistance Systems
KW - Driver Monitoring Systems
KW - Gaze Classification
UR - http://www.scopus.com/inward/record.url?scp=85168010818&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85168010818&partnerID=8YFLogxK
U2 - 10.1109/IV55152.2023.10186697
DO - 10.1109/IV55152.2023.10186697
M3 - Conference contribution
AN - SCOPUS:85168010818
T3 - IEEE Intelligent Vehicles Symposium, Proceedings
BT - IV 2023 - IEEE Intelligent Vehicles Symposium, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 34th IEEE Intelligent Vehicles Symposium, IV 2023
Y2 - 4 June 2023 through 7 June 2023
ER -