Spatio-Temporal Feature Encoding for Traffic Accident Detection in VANET Environment

Zhili Zhou*, Xiaohua Dong, Zhetao Li, Keping Yu, Chun Ding, Yimin Yang

*Corresponding author for this work

Research output: Article › peer-review

70 Citations (Scopus)

Abstract

In the Vehicular Ad hoc Networks (VANET) environment, recognizing traffic accident events in the driving videos captured by vehicle-mounted cameras is an essential task. Generally, traffic accidents have a short duration in driving videos, and the backgrounds of driving videos are dynamic and complex. These make traffic accident detection quite challenging. To effectively and efficiently detect accidents from the driving videos, we propose an accident detection approach based on spatio-temporal feature encoding with a multilayer neural network. Specifically, the multilayer neural network is used to encode the temporal features of video for clustering the video frames. From the obtained frame clusters, we detect the border frames as the potential accident frames. Then, we capture and encode the spatial relationships of the objects detected from these potential accident frames to confirm whether these frames are accident frames. The extensive experiments demonstrate that the proposed approach achieves promising detection accuracy and efficiency for traffic accident detection, and meets the real-time detection requirement in the VANET environment.
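The paper's exact network architecture and encodings are not reproduced here, but the two-stage idea in the abstract can be sketched in a minimal, hypothetical form: (1) compare per-frame temporal feature vectors to find cluster borders (abrupt changes) as candidate accident frames, then (2) check the spatial relationships of detected objects in a candidate frame, here simplified to bounding-box overlap. All thresholds, helper names, and the cosine/IoU choices below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def cluster_borders(frame_feats, threshold=0.5):
    """Find indices where consecutive frame features change abruptly.

    Frames with similar features fall in the same temporal cluster; a low
    cosine similarity between neighbours marks a cluster border, which the
    approach treats as a potential accident frame. Threshold is illustrative.
    """
    borders = []
    for i in range(1, len(frame_feats)):
        a, b = frame_feats[i - 1], frame_feats[i]
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
        if cos < threshold:  # abrupt temporal change -> cluster boundary
            borders.append(i)
    return borders

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def confirm_accident(boxes, iou_thresh=0.2):
    """Spatial check on a candidate frame (simplified stand-in).

    Heavily overlapping object boxes are taken as evidence of a collision;
    the real approach encodes richer spatial relationships of detections.
    """
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if iou(boxes[i], boxes[j]) > iou_thresh:
                return True
    return False
```

A quick usage example: six synthetic frames whose features switch direction halfway produce one cluster border; that candidate frame is then confirmed only if two detected boxes overlap strongly.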

Original language: English
Pages (from-to): 19772-19781
Number of pages: 10
Journal: IEEE Transactions on Intelligent Transportation Systems
Volume: 23
Issue number: 10
DOI
Publication status: Published - Oct 1 2022
Externally published: Yes

ASJC Scopus subject areas

  • Automotive Engineering
  • Mechanical Engineering
  • Computer Science Applications
