Accurate, dense 3D reconstruction of moving and still objects from dynamic stereo sequences based on Temporal Modified-RANSAC and feature-cut

Naotomo Tatematsu*, Jun Ohya

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

This paper improves the authors' earlier method for reconstructing the 3D structure of moving and still objects tracked in video and/or depth-image sequences acquired by moving cameras and/or a moving range finder. The authors previously proposed a Temporal Modified-RANSAC (TMR) based method [1] that (1) discriminates each moving object from the still background in color- and depth-image sequences acquired by moving stereo cameras or a moving range finder, (2) computes the stereo cameras' egomotion, (3) computes the motion of each moving object, and (4) reconstructs the 3D structure of each moving object and of the background. However, the TMR-based method has two problems concerning 3D reconstruction: inaccurate segmentation of each object's region, and sparse 3D reconstructed points within each region. To solve these problems of our conventional method, this paper proposes a new 3D segmentation method that utilizes Graph-cut, which is frequently used for segmentation tasks. First, the proposed method tracks feature points in the color and depth image sequences so that 3D optical flows of the feature points over every N frames are obtained. Then, TMR classifies all the obtained 3D optical flows into regions (3D flow sets) for the background and for each moving object; simultaneously, the rotation matrix and translation vector of each 3D flow set are computed. Next, Graph-cut with an energy function consisting of a color probability, a structure probability, and an a priori probability is performed so that the pixels of each frame are segmented into object regions and the background region. Finally, 3D point clouds are obtained from the segmentation result image and the depth image, and the point clouds are merged using the rotation and translation from the frame N frames prior to the current frame, so that 3D models of the background and of each moving object are constructed with dense 3D point data.
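The core of the classification step, fitting a rigid motion (R, t) to each 3D flow set and peeling off consensus sets RANSAC-style, can be sketched as follows. This is an illustrative approximation, not the authors' TMR implementation: the function names, the SVD-based (Kabsch) transform fit, and the thresholds are all assumptions for the sketch.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) such that dst ~ R @ src + t,
    computed with the SVD-based Kabsch method."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def classify_flows(starts, ends, n_iter=200, thresh=0.05, min_inliers=6, rng=None):
    """RANSAC-style grouping of 3D optical flows into rigid-motion sets.

    Each flow is a (start, end) 3D point pair. The loop repeatedly fits
    (R, t) to random flow triples, keeps the largest consensus set, removes
    it, and continues until too few flows remain. Returns a list of
    (inlier_indices, R, t) tuples, one per detected rigid motion
    (background egomotion or a moving object)."""
    rng = rng or np.random.default_rng(0)
    remaining = np.arange(len(starts))
    sets = []
    while len(remaining) >= min_inliers:
        best = None
        for _ in range(n_iter):
            pick = rng.choice(remaining, size=3, replace=False)
            R, t = estimate_rigid_transform(starts[pick], ends[pick])
            err = np.linalg.norm(starts[remaining] @ R.T + t - ends[remaining], axis=1)
            inliers = remaining[err < thresh]
            if best is None or len(inliers) > len(best):
                best = inliers
        if best is None or len(best) < min_inliers:
            break
        # Refit on the full consensus set for a stable final estimate.
        R, t = estimate_rigid_transform(starts[best], ends[best])
        sets.append((best, R, t))
        remaining = np.setdiff1d(remaining, best)
    return sets
```

The recovered (R, t) of each set is exactly what the final merging step needs: point clouds cut out by the segmentation can be transformed from the frame N frames earlier into the current frame's coordinates and accumulated into each object's dense model.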

Original language: English
Title of host publication: Proceedings of SPIE-IS and T Electronic Imaging - Intelligent Robots and Computer Vision XXIX
Subtitle of host publication: Algorithms and Techniques
DOIs
Publication status: Published - 2012
Event: Intelligent Robots and Computer Vision XXIX: Algorithms and Techniques - Burlingame, CA, United States
Duration: 2012 Jan 23 - 2012 Jan 24

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 8301
ISSN (Print): 0277-786X

Conference

Conference: Intelligent Robots and Computer Vision XXIX: Algorithms and Techniques
Country/Territory: United States
City: Burlingame, CA
Period: 12/1/23 - 12/1/24

Keywords

  • 3D-reconstruction
  • Detect multiple moving objects
  • Egomotion
  • Temporal Modified-RANSAC

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Computer Science Applications
  • Applied Mathematics
  • Electrical and Electronic Engineering
