ICRA 2007, IEEE

A Comparison of Two Approaches for Vision and Self-Localization on a Mobile Robot

Abstract— This paper considers two approaches to the problem of vision and self-localization on a mobile robot. In the first approach, the perceptual processing is primarily bottom-up, with visual object recognition entirely preceding localization. In the second, significant top-down information is incorporated, with vision and localization being intertwined. That is, the processing of vision is highly dependent on the robot’s estimate of its location. The two approaches are implemented and tested on a Sony Aibo ERS-7 robot, localizing as it walks through a color-coded test-bed domain. This paper’s contributions are an exposition of two different approaches to vision and localization on a mobile robot, an empirical comparison of the two methods, and a discussion of the relative advantages of each method.
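The contrast between the two pipelines can be made concrete with a small sketch. The Python below is not the authors' implementation; the landmark map, the detector stubs, and the simplified update_pose step are hypothetical stand-ins (the actual system uses color segmentation on the Aibo camera and a full probabilistic localizer). Its only purpose is to show where the pose estimate enters the vision step in the top-down approach and is absent in the bottom-up one.

```python
"""Minimal sketch (not the paper's code) of bottom-up vs. top-down
vision-and-localization pipelines. All names and values are illustrative."""
import math

# Hypothetical map of color-coded landmarks: name -> (x, y) in meters.
LANDMARKS = {"beacon_pink_yellow": (0.0, 3.0), "goal_blue": (4.0, 1.5)}


def detect_landmarks(image):
    """Stub for bottom-up vision: recognize landmarks with no pose input."""
    # A real detector would segment colors and find beacons/goals here.
    return {"beacon_pink_yellow": 2.9}  # name -> measured range (m)


def detect_landmarks_guided(image, expected):
    """Stub for top-down vision: only accept landmarks expected to be visible."""
    detections = detect_landmarks(image)
    return {k: v for k, v in detections.items() if k in expected}


def expected_visible(pose, max_range=3.5):
    """Landmarks predicted to be within sensing range of the pose estimate."""
    x, y, _ = pose
    return {name for name, (lx, ly) in LANDMARKS.items()
            if math.hypot(lx - x, ly - y) <= max_range}


def update_pose(pose, observations):
    """Trivial stand-in for a localization update (e.g. a particle filter)."""
    x, y, theta = pose
    for name, measured_range in observations.items():
        lx, ly = LANDMARKS[name]
        predicted_range = math.hypot(lx - x, ly - y)
        # Nudge the estimate along the landmark direction to reduce the
        # range residual; a real filter would weight and resample particles.
        scale = 0.1 * (predicted_range - measured_range) / max(predicted_range, 1e-6)
        x += scale * (lx - x)
        y += scale * (ly - y)
    return (x, y, theta)


def bottom_up_step(pose, image):
    """Approach 1: vision runs to completion, then localization consumes it."""
    observations = detect_landmarks(image)  # no pose information used
    return update_pose(pose, observations)


def top_down_step(pose, image):
    """Approach 2: the current pose estimate constrains what vision looks for."""
    expected = expected_visible(pose)  # prediction from the pose estimate
    observations = detect_landmarks_guided(image, expected)
    return update_pose(pose, observations)


if __name__ == "__main__":
    pose = (1.0, 1.0, 0.0)
    frame = None  # placeholder for a camera image
    print("bottom-up:", bottom_up_step(pose, frame))
    print("top-down: ", top_down_step(pose, frame))
```

The structural difference is the single extra data path in top_down_step: the pose estimate feeds the vision step, which is what the paper means by vision and localization being intertwined rather than strictly sequential.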
Daniel Stronger, Peter Stone
Type: Conference
Year: 2007
Where: ICRA
Authors: Daniel Stronger, Peter Stone