Practical Vision-Based Monte Carlo Localization on a Legged Robot (2005)
Mobile robot localization, the ability of a robot to determine its global position and orientation, continues to be a major research focus in robotics. Most past work has studied localization on wheeled robots with range-finding sensors such as sonar or lasers. In this paper, we consider the more challenging scenario of a legged robot localizing with a limited field-of-view camera as its primary sensory input. We begin with a baseline implementation adapted from the literature that provides a reasonable level of competence, but that exhibits some weaknesses in real-world tests. We propose a series of practical enhancements to the robot's sensory and actuator models that enable our robots to achieve a 50% improvement in localization accuracy over the baseline implementation. We go on to demonstrate that the accuracy improvement is even more dramatic when the robot is subjected to large unmodeled movements. These enhancements are each individually straightforward, but together they provide a roadmap for avoiding potential pitfalls when implementing Monte Carlo Localization on vision-based and/or legged robots.
In IEEE International Conference on Robotics and Automation, April 2005.
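The abstract refers to Monte Carlo Localization with sensory and actuator models. As a rough illustration of the general technique (not the paper's specific implementation), the following sketch shows one particle-filter update: a motion step with actuator noise, a sensor step that weights particles by agreement with an observed landmark bearing, and importance resampling. All names, the Gaussian noise model, and the single-landmark bearing sensor are illustrative assumptions.

```python
import math
import random

def mcl_step(particles, motion, landmark, observed_bearing, noise=0.1):
    """One generic Monte Carlo Localization update (illustrative sketch,
    not the paper's implementation).

    particles: list of (x, y, theta) pose hypotheses
    motion:    commanded (dx, dy, dtheta) since the last update
    landmark:  known (x, y) position of a visual landmark
    observed_bearing: bearing to the landmark measured by the camera
    """
    # Actuator (motion) model: move each particle by the commanded
    # odometry, perturbed by Gaussian noise to reflect legged-robot slip.
    moved = []
    dx, dy, dtheta = motion
    for x, y, theta in particles:
        moved.append((x + dx + random.gauss(0, noise),
                      y + dy + random.gauss(0, noise),
                      theta + dtheta + random.gauss(0, noise)))

    # Sensory model: weight each particle by how well the bearing it
    # predicts for the landmark matches the observed bearing.
    weights = []
    for x, y, theta in moved:
        expected = math.atan2(landmark[1] - y, landmark[0] - x) - theta
        # Wrap the angular error into [-pi, pi] before scoring.
        err = math.atan2(math.sin(expected - observed_bearing),
                         math.cos(expected - observed_bearing))
        weights.append(math.exp(-(err ** 2) / (2 * noise ** 2)))

    total = sum(weights) or 1.0
    weights = [w / total for w in weights]

    # Importance resampling: draw a new particle set in proportion
    # to the weights, concentrating hypotheses near likely poses.
    return random.choices(moved, weights=weights, k=len(moved))
```

In practice, the paper's enhancements target exactly the two modeling steps above (the motion and sensor models); the resampling machinery itself is standard.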

Gregory Kuhlmann Ph.D. Alumnus kuhlmann [at] cs utexas edu
Mohan Sridharan Ph.D. Alumnus mhnsrdhrn [at] gmail com
Peter Stone Faculty pstone [at] cs utexas edu