Where do actions come from? Autonomous robot learning of objects and actions (2007)
Decades of AI research have yielded techniques for learning, inference, and planning that depend on human-provided ontologies of self, space, time, objects, actions, and properties. Since robots are constructed with low-level sensor and motor interfaces that do not provide these concepts, the human robotics researcher must create the bindings between the required high-level concepts and the available low-level interfaces. This raises the developmental learning problem for robots of how a learning agent can create high-level concepts from its own low-level experience. Prior work has shown how objects can be individuated from low-level sensation, and certain properties can be learned for individual objects. This work shows how high-level actions can be learned autonomously by searching for control laws that reliably change these properties in predictable ways. We present a robust and efficient algorithm that creates reliable control laws for perceived objects. We demonstrate on a physical robot how these high-level actions can be learned from the robot's own experiences, and can then be applied to a learned object to achieve a desired goal.
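The core idea sketched in the abstract, searching for control laws that change a perceived object property in a predictable direction, can be illustrated with a minimal sketch. This is not the authors' algorithm; the function names (`learn_reliable_laws`, `run_trial`), the reliability threshold, and the toy environment are all hypothetical, chosen only to show the "retain laws whose effect is consistent" pattern.

```python
import random

random.seed(0)  # make the toy demonstration deterministic

def learn_reliable_laws(candidate_laws, run_trial, threshold=0.9, trials=20):
    """Return the candidate control laws whose effect on a tracked
    object property is consistent in direction across repeated trials.

    candidate_laws : list of callables standing in for motor control laws
    run_trial      : run_trial(law) -> observed change in the property
    """
    reliable = []
    for law in candidate_laws:
        outcomes = [run_trial(law) for _ in range(trials)]
        increases = sum(1 for d in outcomes if d > 0)
        decreases = sum(1 for d in outcomes if d < 0)
        # Keep the law only if one direction of change dominates.
        if max(increases, decreases) / trials >= threshold:
            reliable.append(law)
    return reliable

# Toy usage: one law reliably increases the property, one is noise.
push = lambda: None    # hypothetical "push object" control law
wiggle = lambda: None  # hypothetical unreliable motion

def run_trial(law):
    if law is push:
        return 1.0 + random.uniform(-0.2, 0.2)  # always a positive change
    return random.uniform(-1.0, 1.0)            # unpredictable change

laws = learn_reliable_laws([push, wiggle], run_trial)
print(push in laws)  # → True
```

The hedged design point this sketch captures is that an action concept is defined here by the *reliability* of its perceived effect, not by any human-provided label for the motor command.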
In AAAI Spring Symposium Series, Control Mechanisms for Spatial Knowledge Processing in Cognitive/Intelligent Systems, 2007.

Benjamin Kuipers (kuipers@cs.utexas.edu)
Joseph Modayil (modayil@cs.utexas.edu)