Learning Ball Acquisition on a Physical Robot (2004)
Peggy Fidelman and Peter Stone
For a robot to learn to improve its performance based entirely on real-world environmental feedback, the robot's behavior specification and learning algorithm must be constructed so as to enable data-efficient learning. Building upon previous work enabling a quadrupedal robot to learn a fast walk with all of the training done on the physical robot and with no human intervention (AAAI 2004), we demonstrate the ability of the same robot to learn a more high-level, goal-oriented task using the same methodology. In particular, we enable the robot to learn to capture (or "grasp") a ball. The learning occurs over about three hours of robot run time and generates a behavior that is significantly better than a baseline hand-coded behavior. Our method is fully implemented and tested on a Sony Aibo ERS-7 robot.
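The abstract states that this work reuses the methodology of the earlier walk-learning paper, which optimized a parameterized behavior directly from scalar scores gathered on the physical robot. Purely as an illustrative sketch (not the authors' code), the snippet below shows one iteration of a perturbation-based policy gradient step in that style; the names evaluate_on_robot, epsilons, and step_size are hypothetical placeholders.

    import random
    from typing import Callable, List, Optional

    def policy_gradient_step(
        policy: List[float],
        evaluate_on_robot: Callable[[List[float]], float],  # scalar score from a physical trial
        epsilons: List[float],       # per-parameter perturbation sizes (assumed)
        num_trials: int = 15,
        step_size: float = 2.0,
    ) -> List[float]:
        """One iteration of a perturbation-based parameter search.

        Each trial perturbs every parameter by +eps, 0, or -eps, runs the
        resulting behavior on the robot, and records a score.  Per-dimension
        score differences give a crude gradient estimate that is normalized
        and scaled before updating the policy.
        """
        n = len(policy)
        trials = []
        for _ in range(num_trials):
            deltas = [random.choice((-1, 0, 1)) for _ in range(n)]
            candidate = [p + d * e for p, d, e in zip(policy, deltas, epsilons)]
            trials.append((deltas, evaluate_on_robot(candidate)))

        def avg(xs: List[float]) -> Optional[float]:
            return sum(xs) / len(xs) if xs else None

        adjustment = []
        for dim in range(n):
            plus  = avg([s for d, s in trials if d[dim] == +1])
            zero  = avg([s for d, s in trials if d[dim] == 0])
            minus = avg([s for d, s in trials if d[dim] == -1])
            if plus is None or zero is None or minus is None:
                adjustment.append(0.0)          # not enough samples in this dimension
            elif zero >= plus and zero >= minus:
                adjustment.append(0.0)          # no clear improvement either way
            else:
                adjustment.append(plus - minus)

        norm = sum(a * a for a in adjustment) ** 0.5
        if norm == 0.0:
            return list(policy)
        # Step each parameter along the estimated gradient, scaled by its epsilon.
        return [p + step_size * (a / norm) * e
                for p, a, e in zip(policy, adjustment, epsilons)]

In practice, evaluate_on_robot would run the candidate parameters on the Aibo for one or more attempts and return a task score; repeating the step above until scores plateau is what consumes the hours of robot run time the abstract mentions.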
Citation:
In International Symposium on Robotics and Automation (ISRA) 2004.