Vision-based Mobile Robot Localization And Mapping using Scale-Invariant Features

INTRODUCTION: Mobile robot localization and mapping, the process of simultaneously tracking the position of a mobile robot relative to its environment and building a map of the environment, has been a central research topic for the past few years. Accurate localization is a prerequisite for building a good map, and an accurate map is essential for good localization. Simultaneous Localization And Map Building (SLAMB) is therefore a critical underlying capability for successful mobile robot navigation in a large environment, irrespective of the higher-level goals or applications.

To achieve SLAMB, different sensor modalities can be used, such as sonar, laser range finders, and vision. Many early successful approaches utilize artificial landmarks, such as bar-code reflectors, ultrasonic beacons, and visual patterns, and therefore do not function properly in beacon-free environments.

Vision-based approaches using stable natural landmarks in unmodified environments are highly desirable for a wide range of applications.

Harris’s 3D vision system DROID uses the visual motion of image corner features for 3D reconstruction. Kalman filters are used to track the features, from which both the camera motion and the 3D positions of the features are determined. DROID is accurate in the short to medium term, but long-term drift can occur.

The ego-motion and the perceived 3D structure can be self-consistently in error. DROID is an incremental algorithm and runs at near real-time.

A stereo vision algorithm for mobile robot mapping and navigation builds a 2D occupancy grid map from the stereo data. However, since the robot does not localize itself using the map, odometry error is not corrected and the map may drift over time. A subsequent approach combined this 2D occupancy map with sparse 3D landmarks for robot localization, using corners on planar objects as stable landmarks. However, landmarks are matched only in the next frame and are not kept for matching in subsequent frames.
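The Kalman-filter feature tracking underlying systems like DROID can be illustrated with a minimal sketch. The scalar constant-position model and the noise variances below are illustrative assumptions for exposition only; the actual system tracks 3D feature positions with full covariance.

```python
def kalman_track(z_seq, r=1.0, q=0.01):
    """Track a scalar feature coordinate with a 1-D Kalman filter
    (constant-position model). The scalar state and the noise
    variances r (measurement) and q (process) are illustrative
    assumptions, not parameters of the original system."""
    x, p = z_seq[0], 1.0          # initialize from the first measurement
    estimates = [x]
    for z in z_seq[1:]:
        p = p + q                 # predict: uncertainty grows over time
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Noisy observations of a feature that is actually stationary at 5.0
est = kalman_track([5.2, 4.9, 5.1, 5.0, 4.8])
```

As measurements accumulate, the gain shrinks and the estimate settles near the true value, which is what makes such tracking accurate in the short to medium term.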
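The occupancy-grid construction described above can be sketched as a log-odds update along each stereo ray. The grid resolution, the log-odds increments, and the simple integer ray trace are illustrative assumptions, not details of the original system.

```python
import numpy as np

def update_occupancy_grid(grid, robot_rc, hit_rc, l_occ=0.85, l_free=-0.4):
    """Update a log-odds occupancy grid with one stereo range measurement.
    Cells along the ray from the robot to the detected obstacle are made
    more likely free; the endpoint cell is made more likely occupied.
    The increment values l_occ and l_free are illustrative assumptions."""
    (r0, c0), (r1, c1) = robot_rc, hit_rc
    n = max(abs(r1 - r0), abs(c1 - c0), 1)
    for i in range(n):            # simple ray trace toward the obstacle
        r = r0 + round(i * (r1 - r0) / n)
        c = c0 + round(i * (c1 - c0) / n)
        grid[r, c] += l_free      # space in front of the obstacle is free
    grid[r1, c1] += l_occ         # obstacle endpoint is occupied
    return grid

# Example: a 10x10 grid, robot at cell (0, 0), obstacle detected at (0, 5)
grid = update_occupancy_grid(np.zeros((10, 10)), (0, 0), (0, 5))
```

Because only the sensor data update the grid and the robot's own pose is taken from odometry, uncorrected odometry error accumulates directly into the map, which is the drift problem noted above.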

Markov localization has been employed by various teams with success. For example, the Deutsches Museum Bonn tour-guide robot RHINO utilizes a metric version of this approach with laser sensors. However, it must be supplied with a manually derived map and cannot learn maps from scratch.
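The predict/update cycle of grid-based Markov localization can be sketched in one dimension. The circular corridor, the door-sensor model, and the probabilities below are illustrative assumptions for exposition, not RHINO's actual laser-based models.

```python
import numpy as np

def markov_localize_step(belief, motion, p_hit, doors, z):
    """One predict/update cycle of grid-based Markov localization over a
    1-D circular corridor. `belief` is a probability distribution over
    cells; `doors` marks cells where a door is visible; `z` is the
    observation (True if a door is seen). The sensor and motion models
    are illustrative assumptions."""
    # Predict: shift the belief by the commanded motion (circular world).
    belief = np.roll(belief, motion)
    # Update: weight each cell by how well it explains the observation.
    likelihood = np.where(doors == z, p_hit, 1.0 - p_hit)
    belief = belief * likelihood
    return belief / belief.sum()      # renormalize to a distribution

# Corridor with doors at cells 0 and 4; robot moves one cell, sees a door.
belief = np.full(8, 1.0 / 8)
doors = np.array([True, False, False, False, True, False, False, False])
belief = markov_localize_step(belief, motion=1, p_hit=0.9, doors=doors, z=True)
```

Note that the `doors` array plays the role of the map: the filter can only weight observations against it, which is why such a map must be supplied in advance rather than learned.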

Thrun et al. proposed a probabilistic approach to map building using the Expectation-Maximization (EM) algorithm. The E-step estimates robot locations at various points based on the currently best available map, and the M-step estimates a maximum-likelihood map based on the locations computed in the E-step. The algorithm searches for the most likely map by simultaneously considering the locations of all past sonar scans; after traversing a cyclic environment, it revises estimates backward in time. It is a batch algorithm and cannot be run in real-time.
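The alternation between the two steps can be sketched in a deliberately tiny setting. The 1-D world with a single wall, the equal weighting of odometry against the map, and the least-squares updates are all simplifying assumptions; they only mirror the structure of the approach (E-step: poses from the current map; M-step: map from all poses and scans at once), not its actual probabilistic models.

```python
import numpy as np

def em_map_building(odometry, scans, n_iter=10):
    """Toy 1-D sketch of EM-based map building: each scan measures the
    range to a single wall. E-step: re-estimate all robot poses from the
    current wall estimate and the scans (averaged with the odometry
    prior, equal weighting assumed). M-step: re-estimate the wall
    position (the map) as the maximum-likelihood fit to all poses.
    The 1-D world and the weighting are illustrative assumptions."""
    poses = np.cumsum(odometry)           # initial poses from raw odometry
    wall = np.mean(poses + scans)         # initial map estimate
    for _ in range(n_iter):
        # E-step: reconcile each pose with both odometry and the map.
        poses = 0.5 * np.cumsum(odometry) + 0.5 * (wall - scans)
        # M-step: maximum-likelihood wall position given the new poses.
        wall = np.mean(poses + scans)
    return poses, wall

# Robot advances ~1.0 per step; each scan is the measured range to the wall.
odometry = np.array([1.0, 1.0, 1.0])
scans = np.array([9.2, 8.1, 7.0])
poses, wall = em_map_building(odometry, scans)
```

Because every iteration re-estimates all past poses against the whole map, the procedure must process the full history at once, which is why the approach is inherently a batch algorithm.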


