Abstract
In robot navigation, finding the positions of landmarks or vision sensors located around the robot is a fundamental problem. This paper proposes a method for reconstructing the qualitative positions of multiple vision sensors from qualitative information observed by the sensors, namely the motion directions of moving objects. To acquire the qualitative positions of points directly, the proposed method iterates the following steps: 1) observing the motion directions (left or right) of moving objects with the vision sensors, 2) classifying the vision sensors into spatially classified pairs based on the motion directions, 3) acquiring three point constraints, and 4) propagating the constraints. In contrast to previous methods, which reconstruct the environment structure from quantitative measurements and then obtain qualitative representations by abstracting it, this paper focuses on how to acquire the qualitative positions of landmarks from low-level, simple, and reliable (that is, "qualitative") information. The method has been evaluated with simulations and also verified under observation errors.
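The first two steps of the iteration above can be illustrated with a minimal sketch. Assuming 2D sensor and object positions, the observed direction ("left" or "right") follows from the sign of a cross product, and sensors reporting opposite directions for the same motion lie on opposite sides of the motion line, forming a spatially classified pair. The helper names below are hypothetical, not the paper's notation.

```python
import numpy as np

def motion_direction(sensor, obj_from, obj_to):
    """Qualitative motion direction ('left' or 'right') of a moving
    object as seen from a sensor, via the sign of a 2D cross product.
    Hypothetical helper -- a sketch, not the paper's exact formulation."""
    view = np.asarray(obj_from, float) - np.asarray(sensor, float)
    move = np.asarray(obj_to, float) - np.asarray(obj_from, float)
    cross = view[0] * move[1] - view[1] * move[0]
    return "left" if cross > 0 else "right"

def classify_pairs(sensors, obj_from, obj_to):
    """Split sensors by observed direction; any sensor from one group
    with any sensor from the other forms a spatially classified pair,
    since the motion line separates them."""
    dirs = {name: motion_direction(pos, obj_from, obj_to)
            for name, pos in sensors.items()}
    left = [n for n, d in dirs.items() if d == "left"]
    right = [n for n, d in dirs.items() if d == "right"]
    return left, right

# Object moving along +x passes between sensor A (above) and B (below):
left, right = classify_pairs({"A": (0, 1), "B": (0, -1)}, (1, 0), (2, 0))
```

Repeated observations of different motions yield many such pairs, from which three point constraints are derived and propagated in the later steps.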
Original language | English |
---|---|
Pages (from-to) | 268-278 |
Number of pages | 11 |
Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
Volume | 23 |
Issue number | 3 |
DOIs | |
Publication status | Published - 2001 Mar |
Externally published | Yes |
Keywords
- Constraint propagation
- Map building
- Qualitative observation
- Qualitative spatial representation
- Spatially classified pair
- Three point constraint
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition
- Computational Theory and Mathematics
- Artificial Intelligence
- Applied Mathematics