Peggy Fidelman and Peter Stone
When developing skills on a physical robot, it is appealing to turn to modern machine learning methods to automate the process. However, when no accurate simulator exists for the type of motion in question, all learning must occur on the physical robot itself. In such a case, there is a high premium on quick, efficient learning — specifically, learning with low sample complexity. Recent results in learning locomotion have demonstrated the feasibility of learning fast walks directly on quadrupedal robots. This paper demonstrates that it is also possible to learn a higher-level skill requiring finer motor coordination, again with all learning occurring directly on the robot. In particular, the paper presents a learned ball-grasping skill on a commercially available Sony Aibo robot, with no human intervention other than battery changes. The learned skill significantly outperforms our best hand-tuned solution. Because the learned grasping skill relies on a learned walk, we characterize our learning implementation within the layered learning formalism. To our knowledge, the two learned layers represent the first use of layered learning on a physical robot.
In RoboCup 2006: Robot Soccer World Cup X, Gerhard Lakemeyer, Elizabeth Sklar, Domenico G. Sorrenti, and Tomoichi Takahashi (Eds.), Vol. 4434, pp. 59-71, Springer Verlag, Berlin, 2007.
