The UT Austin Team

Autonomous Sensor and Actuator Model Induction

One of the challenges we face in the legged league is that every time we change walk parameters or playing surfaces, our robots' odometry changes. Additionally, the apparent size of a landmark can vary unpredictably as a function of the robot's distance from it. We needed a way to learn both the odometry (the action model) and the apparent size of landmarks (the sensor model) autonomously: calibrating these models by hand is tedious, time-consuming, and error-prone.

To this end, we developed a technique for Autonomous Sensor and Actuator Model Induction (ASAMI). While previous approaches to model learning make use of reliable feedback or a separate, accurate sensor, ASAMI is unsupervised, in that it receives no well-calibrated feedback about its location. Starting with only an inaccurate action model, it learns accurate relative action and sensor models. Furthermore, ASAMI is fully autonomous, in that it operates with no human supervision. We have fully implemented ASAMI on our Aibo ERS-7 robots.

Experiment in Action: action and sensor model training video (4.2 MB MPEG)

In this video of the Aibo performing ASAMI, the robot alternates between phases in which it walks toward and away from the beacon. In the forward phase, it randomly chooses a new forward-moving action command every three seconds; in the backward phase, it chooses randomly among backward-moving commands. After about two and a half minutes, it has acquired accurate and mutually consistent models mapping each action command to a velocity, and the size of the beacon in its image plane to the distance from the beacon.
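The data-collection schedule just described can be sketched as follows. The phase length, command ranges, and function names are all assumptions for illustration; only the three-second command period and the roughly 2.5-minute total come from the description above.

```python
# Hypothetical sketch of the alternating forward/backward command schedule.
import random

PHASE_LEN = 30.0    # assumed seconds per phase
CMD_PERIOD = 3.0    # new random command every three seconds (from the text)
TOTAL = 150.0       # about two and a half minutes (from the text)

def command_schedule(seed=0):
    """Return a list of (start_time, command) pairs; positive commands
    walk toward the beacon, negative commands walk away."""
    rng = random.Random(seed)
    t, toward = 0.0, True
    schedule = []
    while t < TOTAL:
        lo, hi = (0.2, 1.0) if toward else (-1.0, -0.2)
        schedule.append((t, rng.uniform(lo, hi)))
        t += CMD_PERIOD
        if t % PHASE_LEN < CMD_PERIOD:   # crossed a phase boundary
            toward = not toward
    return schedule

sched = command_schedule()
print(len(sched), "commands over", TOTAL, "seconds")
```

Alternating directions keeps the robot within sight of the beacon while still exercising the full range of commands in both models.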

This work is an exciting first step toward the long-term challenge of fully autonomous calibration of complex, multi-modal sensor and action models on mobile robots.

Full details of our approach are available in the following two papers:
