Robot Developmental Learning of Objects, Actions, and Tools

Planning a course of action to achieve a goal requires knowledge of the world, which is typically represented in terms of objects, actions, and relations, including the preconditions and consequences of actions. This high-level ontology of objects and actions makes it feasible for a reasoning agent with limited resources to construct plans to achieve many of its goals, much of the time. The problem we propose to solve is: How can high-level concepts of object and action be learned autonomously from experience with low-level sensorimotor interaction?
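As a concrete (and hedged) illustration of the kind of high-level representation described above, the sketch below encodes one action with explicit preconditions and consequences in the STRIPS style. All names in it (Action, pick_up, hand_empty, and so on) are illustrative assumptions, not this project's actual ontology.

    # A minimal STRIPS-style action schema, illustrating the kind of
    # high-level representation described above. All names here are
    # illustrative assumptions, not this project's actual ontology.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        name: str
        preconditions: frozenset  # facts that must hold before acting
        add_effects: frozenset    # facts the action makes true
        del_effects: frozenset    # facts the action makes false

        def applicable(self, state: frozenset) -> bool:
            return self.preconditions <= state

        def apply(self, state: frozenset) -> frozenset:
            assert self.applicable(state)
            return (state - self.del_effects) | self.add_effects

    # Example: grasping a block when the hand is empty.
    pick_up = Action(
        name="pick_up(block)",
        preconditions=frozenset({"hand_empty", "reachable(block)"}),
        add_effects=frozenset({"holding(block)"}),
        del_effects=frozenset({"hand_empty"}),
    )

    state = frozenset({"hand_empty", "reachable(block)"})
    state = pick_up.apply(state)  # now {"holding(block)", "reachable(block)"}

A planner with limited resources can search over such discrete operators far more cheaply than over raw sensorimotor trajectories, which is what makes this ontology useful.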

To carry out a high-level plan, a physically embodied robot requires its symbols to be grounded in its continuous sensorimotor world. Its sensory interface is a large vector of sense elements (e.g., camera pixels or range-sensor rays), and its motor interface accepts low-level incremental motor signals. Together, we call these the "pixel-level" sensorimotor interface between the continuous world and the agent's physical body.
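A minimal sketch of what such an interface might look like in code, assuming hypothetical names (SensorimotorInterface, read_senses, send_motor_increment) and dimensions:

    # Hypothetical sketch of a pixel-level sensorimotor interface: a flat,
    # uninterpreted sense vector in, low-level incremental motor signals out.
    # Names and dimensions are illustrative assumptions, not the lab's API.
    import numpy as np

    class SensorimotorInterface:
        def __init__(self, n_senses: int = 180, n_motors: int = 2):
            self.n_senses = n_senses  # e.g., range-sensor rays or camera pixels
            self.n_motors = n_motors  # e.g., incremental wheel velocities

        def read_senses(self) -> np.ndarray:
            # A real robot would query hardware here; this is a stub.
            return np.zeros(self.n_senses)

        def send_motor_increment(self, du: np.ndarray) -> None:
            # A real robot would command its actuators here.
            assert du.shape == (self.n_motors,)

    # From the agent's side, the world is only this loop: an uninterpreted
    # vector of sense elements in, small motor increments out.
    iface = SensorimotorInterface()
    z = iface.read_senses()
    iface.send_motor_increment(np.array([0.05, -0.05]))

The point of the sketch is that nothing at this level carries symbolic meaning; objects and actions must be constructed on top of it.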

In simple, short-lived robotic experiments on performing actions and recognizing objects, it is feasible to build perceptual features and motor control laws by hand. However, to cope with the complexity of the real world, robots will need richer sensory systems and more complex motor systems, capable of adapting to extensive changes. Learning will start with developmental learning to acquire and ground high-level concepts in the first place, and then will continue with life-long learning to adapt to changes in the world and in the robot's own capabilities.

Our hypothesis is that the concepts of object and action are learned as part of a larger package of concepts, acquired in approximately the following sequence:

1. the concepts of figure and ground in the sensory image;
2. objects, distinguished from the background by motion cues;
3. simple actions based on open-loop control;
4. the distinction between self and non-self objects, based on which actions are reliable (sketched below);
5. more complex actions based on closed-loop control;
6. the effects of actions and self objects on non-self objects;
7. identification of grasp actions and graspable objects;
8. the effects of actions and grasped objects on non-self objects;
9. effects achievable only by using a grasped object, i.e., a tool.
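To make one step concrete, here is a hedged sketch of step 4: classifying tracked image regions as self or non-self by how reliably their motion follows the robot's own motor commands. The function name, the correlation test, and the threshold are all illustrative assumptions, not the project's actual method.

    # Hedged sketch of step 4: distinguishing "self" from "non-self"
    # trackers by how reliably their motion follows motor commands.
    # The correlation test and threshold are illustrative assumptions.
    import numpy as np

    def self_nonself(motor_log: np.ndarray,
                     tracker_motions: dict[str, np.ndarray],
                     threshold: float = 0.9) -> dict[str, str]:
        """Label each tracked image region 'self' or 'non-self'.

        motor_log:        (T,) motor command magnitudes over T time steps
        tracker_motions:  tracker id -> (T,) observed motion magnitudes
        """
        labels = {}
        for tid, motion in tracker_motions.items():
            # A region whose motion is highly contingent on the robot's
            # own motor commands is plausibly part of the robot's body.
            r = np.corrcoef(motor_log, motion)[0, 1]
            labels[tid] = "self" if r >= threshold else "non-self"
        return labels

    # Toy usage: the 'arm' tracker moves whenever the motor does; the
    # 'ball' tracker moves independently of it.
    motors = np.array([0., 1., 1., 0., 1., 0., 0., 1.])
    labels = self_nonself(motors, {
        "arm":  np.array([0., 1., 1., 0., 1., 0., 0., 1.]),
        "ball": np.array([1., 0., 0., 1., 0., 0., 1., 0.]),
    })
    # labels -> {'arm': 'self', 'ball': 'non-self'}

The design intuition is that body parts move in near-perfect contingency with motor commands, while external objects generally do not.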

This work has taken place in the Intelligent Robotics Lab at the Artificial Intelligence Laboratory, The University of Texas at Austin. Research of the Intelligent Robotics Lab is supported in part by grants from the Texas Advanced Research Program (3658-0170-2007), from the National Science Foundation (IIS-0413257, IIS-0713150, and IIS-0750011), and from the National Institutes of Health (EY016089).