An open multi-sensor fusion toolbox for autonomous vehicles

Abraham Monrroy Cano*, Eijiro Takeuchi, Shinpei Kato, Masato Edahiro

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


We present an accurate and easy-to-use multi-sensor fusion toolbox for autonomous vehicles. It includes 'target-less' multi-LiDAR (Light Detection and Ranging) and camera–LiDAR calibration, sensor fusion, and a fast and accurate point-cloud ground classifier. Our calibration methods do not require complex setup procedures, and once the sensors are calibrated, our framework eases the fusion of multiple point clouds and cameras. In addition, we present an original real-time ground–obstacle classifier that runs on the CPU and is designed to work with any type and number of LiDARs. Evaluation results on the KITTI dataset confirm that our calibration method has accuracy comparable to other state-of-the-art contenders in the benchmark.
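The abstract's core idea — fusing point clouds from multiple calibrated LiDARs — can be illustrated with a minimal sketch. This is not the toolbox's actual API; it only shows the underlying operation: once an extrinsic calibration (a 4×4 rigid transform) between two sensors is known, the second sensor's cloud is transformed into the first sensor's frame and the two clouds are concatenated. The transform values below are hypothetical.

```python
import numpy as np

def transform_points(points, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) point cloud."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homog @ T.T)[:, :3]

# Hypothetical extrinsic: second LiDAR mounted 1 m to the left (+y) of the first.
T = np.eye(4)
T[1, 3] = 1.0

cloud_a = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])   # first sensor's frame
cloud_b = np.array([[1.0, -1.0, 0.0]])                    # same scene point, second frame

# Fuse: express cloud_b in cloud_a's frame, then stack.
fused = np.vstack([cloud_a, transform_points(cloud_b, T)])
```

After the transform, the point from the second sensor lands at the same coordinates as the corresponding scene point in the first sensor's frame, which is what makes downstream processing (such as ground classification) sensor-agnostic.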

Original language: English
Pages (from-to): 252-264
Number of pages: 13
Journal: IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Issue number: 1
Publication status: Published - 2020
Externally published: Yes


Keywords

  • Autonomous driving
  • Calibration
  • Cameras
  • Ground detection
  • LiDAR
  • Sensor fusion

ASJC Scopus subject areas

  • Signal Processing
  • Computer Graphics and Computer-Aided Design
  • Electrical and Electronic Engineering
  • Applied Mathematics


