David Pierce and Benjamin Kuipers. 1997.
Map learning with uninterpreted sensors and effectors.
Artificial Intelligence 92: 169-229.


This paper presents a set of methods by which a learning agent can learn a sequence of increasingly abstract and powerful interfaces to control a robot whose sensorimotor apparatus and environment are initially unknown. The result of the learning is a rich hierarchical model of the robot's world (its sensorimotor apparatus and environment). The learning methods rely on generic properties of the robot's world, such as almost-everywhere smooth effects of motor control signals on sensory features.

At the lowest level of the hierarchy, the learning agent analyzes the effects of its motor control signals in order to define a new set of control signals, one for each of the robot's degrees of freedom. It uses a generate-and-test approach to define sensory features that capture important aspects of the environment. It uses linear regression to learn models that characterize the context-dependent effects of the control signals on the learned features, and it uses these models to define control laws for finding and following paths defined by constraints on those features. The agent abstracts these control laws, which interact with the continuous environment, to a finite set of actions that implement discrete state transitions. At this point, the agent has abstracted the robot's continuous world to a finite-state world and can use existing methods to learn its structure.
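The regression step above can be illustrated with a minimal sketch (not the paper's implementation): a hypothetical robot with two motor signals and three sensory features, whose true effect matrix `M_true` is unknown to the agent. The agent "babbles" random motor commands, records the resulting feature changes, and recovers the effect matrix by least squares.

```python
# Hedged sketch of the linear-regression step: all names (M_true, step,
# the babbling loop) are illustrative assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown dynamics: delta_y = M_true @ u + noise,
# i.e. locally (almost-everywhere) smooth effects of motor signals.
M_true = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.5, -0.5]])

def step(u):
    """Apply motor command u; return the observed change in features."""
    return M_true @ u + 0.01 * rng.standard_normal(3)

# Motor babbling: random commands and the feature changes they cause.
U = rng.uniform(-1.0, 1.0, size=(200, 2))
dY = np.array([step(u) for u in U])

# Least-squares estimate of the effect matrix: dY ~= U @ M_hat.T
M_hat, *_ = np.linalg.lstsq(U, dY, rcond=None)
M_hat = M_hat.T

print(np.round(M_hat, 2))
```

In the paper the learned models are context-dependent (valid in particular regions of the feature space); this sketch fits a single global linear model only to show the regression machinery.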

The learning agent's methods are evaluated on several simulated robots with different sensorimotor systems and environments.


Read David Pierce's Dissertation

Some people find the dissertation itself to be clearer than the journal paper version of this work.