Dan on the field

Robust Vision

Computer vision is a broad and significant ongoing research challenge, even when performed on an individual image or on streaming video from a high-quality stationary camera with abundant computational resources. When faced with streaming video from a lower-quality camera that moves rapidly and jerkily, along with limited computational resources, the challenge only increases. In this paper we present our implementation of a real-time vision system on a mobile robot platform that uses a camera image as the primary sensory input. The constraint of performing all processing, including segmentation and object detection, in real time on board the robot rules out some state-of-the-art methods that might otherwise apply. We present the methods that we developed to achieve a practical vision system within these constraints. Our approach is fully implemented and tested on a team of Sony AIBO robots, enabling them to place among the top finishers at an annual international robot soccer competition.
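To illustrate the kind of per-pixel work that must fit a real-time on-board budget, here is a minimal color-segmentation sketch using a precomputed lookup table, a technique commonly used on embedded robot platforms. The color classes, channel quantization, and thresholds below are illustrative assumptions, not the calibration or method described in the paper.

```python
import numpy as np

# Illustrative color classes; the actual classes used on the robot
# are described in the paper, not here.
UNKNOWN, ORANGE, GREEN = 0, 1, 2

def build_lut():
    """Precompute a class label for every quantized (Y, U, V) triple.

    Quantizing each channel to 5 bits keeps the table at
    32 * 32 * 32 = 32768 one-byte entries, small enough to hold
    in memory on an embedded platform.
    """
    lut = np.full((32, 32, 32), UNKNOWN, dtype=np.uint8)
    # Toy threshold boxes chosen purely for illustration.
    lut[8:24, 0:12, 20:32] = ORANGE   # bright, red-shifted region
    lut[4:20, 10:22, 4:14] = GREEN    # field-green region
    return lut

def segment(frame_yuv, lut):
    """Label every pixel with a single table lookup (no per-pixel math)."""
    y = frame_yuv[..., 0] >> 3   # drop the 3 low bits of each channel
    u = frame_yuv[..., 1] >> 3
    v = frame_yuv[..., 2] >> 3
    return lut[y, u, v]
```

Because classification reduces to one shift-and-index per pixel, the per-frame cost is predictable, which is what makes the real-time budget attainable.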

Here we provide a set of videos that show the output at various stages of the algorithm described in our paper. The original sequence was captured using the robot, and it was then post-processed offline, using the same algorithm as on the robot, to generate the other videos. The sequences were generated at half the original image resolution because of memory constraints on the robot. List of image sequences (shown below):

Full details of our approach are available in the following paper:
