Traffic light recognition using high-definition map features

Manato Hirabayashi*, Adi Sujiwo, Abraham Monrroy, Shinpei Kato, Masato Edahiro

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

32 Citations (Scopus)

Abstract

Accurate recognition of traffic lights on public roads is a critical step toward deploying automated driving systems. Camera sensors are widely used for object detection, so it may seem natural to apply them to traffic signal detection as well. However, images captured by cameras contain a large number of unrelated objects, causing a significant reduction in detection accuracy. This paper presents an innovative yet reliable method to recognize the state of traffic lights in images. With the help of accurate 3D maps and a self-localization technique within them, elements already used in autonomous driving systems, we propose a method to improve traffic light detection accuracy. Using the current vehicle location and the positions of traffic signals stored in the map, we extract the region of interest (ROI) related only to the traffic light from images captured by a vehicle-mounted camera, and then feed the ROIs to custom classifiers to recognize the light state. Our method was evaluated on two datasets recorded during urban public-road driving experiments, one taken in daylight and the other during sunset. The quantitative evaluations indicate that our method achieved over 97% average precision for each state and approximately 90% recall at distances up to 90 meters under favorable conditions.
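The abstract describes projecting traffic light positions from an HD map into the camera image, using the vehicle's self-localization, to crop ROIs for classification. The paper's exact formulation is not given here, so the following is a minimal illustrative sketch of that projection step under standard pinhole-camera assumptions; the function names, coordinate frames, and parameters are hypothetical, not the authors' API.

```python
import numpy as np

def project_to_image(point_world, T_world_to_cam, K):
    """Project a 3D map point (world frame) into pixel coordinates.

    T_world_to_cam: 4x4 homogeneous transform from the world frame into the
    camera frame (in this setting it would come from self-localization
    against the HD map). K: 3x3 camera intrinsic matrix.
    Returns (u, v), or None if the point lies behind the camera.
    """
    p = np.append(point_world, 1.0)      # homogeneous world coordinates
    p_cam = T_world_to_cam @ p           # transform into the camera frame
    if p_cam[2] <= 0.0:                  # behind the image plane: not visible
        return None
    uvw = K @ p_cam[:3]                  # pinhole projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

def roi_around(u, v, half_w, half_h, img_w, img_h):
    """Rectangular ROI centred on (u, v), clamped to the image bounds."""
    x0 = max(0, int(u - half_w))
    y0 = max(0, int(v - half_h))
    x1 = min(img_w, int(u + half_w))
    y1 = min(img_h, int(v + half_h))
    return x0, y0, x1, y1

# Example: a mapped traffic light 10 m straight ahead of the camera projects
# to the principal point; the ROI around it is then passed to a classifier.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
uv = project_to_image(np.array([0.0, 0.0, 10.0]), np.eye(4), K)
roi = roi_around(uv[0], uv[1], 40, 40, 640, 480)
```

In practice the ROI half-size would be scaled with distance to the light (a farther light occupies fewer pixels), which is one way a map-based approach can keep recall high out to long ranges such as the 90 m reported above.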

Original language: English
Pages (from-to): 62-72
Number of pages: 11
Journal: Robotics and Autonomous Systems
Volume: 111
DOIs
Publication status: Published - Jan 2019

Keywords

  • Autonomous vehicles
  • Information fusion
  • Vehicle environment perception

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Mathematics (all)
  • Computer Science Applications
