TY - JOUR
T1 - Spatio-Temporal Feature Encoding for Traffic Accident Detection in VANET Environment
AU - Zhou, Zhili
AU - Dong, Xiaohua
AU - Li, Zhetao
AU - Yu, Keping
AU - Ding, Chun
AU - Yang, Yimin
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China under Grant 61972205, Grant 62032020, and Grant 62122032; in part by the Hunan Science and Technology Planning Project under Grant 2019RS3019; in part by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) Fund; and in part by the Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET) Fund.
Publisher Copyright:
© 2000-2011 IEEE.
PY - 2022/10/1
Y1 - 2022/10/1
N2 - In the Vehicular Ad hoc Networks (VANET) environment, recognizing traffic accident events in the driving videos captured by vehicle-mounted cameras is an essential task. Generally, traffic accidents have a short duration in driving videos, and the backgrounds of driving videos are dynamic and complex. These characteristics make traffic accident detection quite challenging. To detect accidents from driving videos both effectively and efficiently, we propose an accident detection approach based on spatio-temporal feature encoding with a multilayer neural network. Specifically, the multilayer neural network is used to encode the temporal features of a video for clustering its frames. From the obtained frame clusters, we detect the border frames as potential accident frames. Then, we capture and encode the spatial relationships of the objects detected in these potential accident frames to confirm whether they are accident frames. Extensive experiments demonstrate that the proposed approach achieves promising accuracy and efficiency for traffic accident detection, and meets the real-time detection requirement in the VANET environment.
KW - Neural network
KW - VANETs
KW - security communication
KW - traffic accident detection
KW - traffic safety
UR - http://www.scopus.com/inward/record.url?scp=85124824643&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124824643&partnerID=8YFLogxK
U2 - 10.1109/TITS.2022.3147826
DO - 10.1109/TITS.2022.3147826
M3 - Article
AN - SCOPUS:85124824643
SN - 1524-9050
VL - 23
SP - 19772
EP - 19781
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
IS - 10
ER -