A Comparison of Two Approaches for Vision and Self-Localization on a Mobile Robot (2007)
This paper considers two approaches to the problem of vision and self-localization on a mobile robot. In the first approach, perceptual processing is primarily bottom-up, with visual object recognition entirely preceding localization. In the second, significant top-down information is incorporated: vision and localization are intertwined, and the processing of vision depends heavily on the robot's estimate of its location. The two approaches are implemented and tested on a Sony Aibo ERS-7 robot localizing as it walks through a color-coded test-bed domain. This paper's contributions are an exposition of the two approaches to vision and localization on a mobile robot, an empirical comparison of the two methods, and a discussion of the relative advantages of each.
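The structural contrast between the two approaches can be sketched in a few lines of code. The Python below is purely illustrative: every name (Pose, LANDMARKS, detect_objects, predict_expected_features, match_expectations, update_pose) is a hypothetical stand-in, not the paper's implementation; it shows only where the pose estimate enters the processing pipeline in each approach.

import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    theta: float

# Hypothetical map of color-coded landmarks: id -> (x, y) in meters.
LANDMARKS = {"pink_beacon": (0.0, 3.0), "blue_goal": (4.0, 1.5)}

def detect_objects(image):
    """Stand-in for full bottom-up object recognition over the whole image."""
    return [("pink_beacon", 1.2)]  # (landmark id, measured bearing)

def predict_expected_features(pose):
    """Top-down step: predict each landmark's bearing from the believed pose."""
    return {lid: math.atan2(ly - pose.y, lx - pose.x) - pose.theta
            for lid, (lx, ly) in LANDMARKS.items()}

def match_expectations(image, expected):
    """Stand-in: vision searches the image only near the predicted bearings."""
    return [("pink_beacon", expected["pink_beacon"] + 0.05)]

def update_pose(pose, observations):
    """Stand-in for the localization update (e.g., one particle-filter step)."""
    return pose

def bottom_up_step(image, pose):
    # Approach 1: object recognition runs to completion, then localization.
    return update_pose(pose, detect_objects(image))

def top_down_step(image, pose):
    # Approach 2: the current pose estimate steers the visual search.
    expected = predict_expected_features(pose)
    return update_pose(pose, match_expectations(image, expected))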
Citation:
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 3915-3920, April 2007.
Peter Stone (Faculty), pstone [at] cs utexas edu
Daniel Stronger (Ph.D. Alumni), dan stronger [at] gmail com