Humanoid Robots Learning to Walk Faster: From the Real World to Simulation and Back (2013)
Simulation is often used in research and industry as a low-cost, high-efficiency alternative to real-world testing. Simulation has also been used to develop and test powerful learning algorithms. However, parameters learned in simulation often do not translate directly to the real application, in particular because heavy optimization in simulation has been observed to exploit the simulator's inevitable simplifications, thus creating a gap between simulation and application that reduces the utility of learning in simulation. This paper introduces Grounded Simulation Learning (GSL), an iterative optimization framework for speeding up robot learning using an imperfect simulator. In GSL, a behavior is developed on a robot and then repeatedly: 1) the behavior is optimized in simulation; 2) the resulting behavior is tested on the real robot and compared to the expected results from simulation; and 3) the simulator is modified, using a machine-learning approach, to bring it closer in line with reality. This approach is fully implemented and validated on the task of learning to walk using an Aldebaran Nao humanoid robot. Starting from a set of stable, hand-coded walk parameters, four iterations of this three-step optimization loop led to more than a 25% increase in the robot's walking speed.
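For readers who want the loop structure at a glance, below is a minimal sketch, in Python, of the three-step GSL cycle described in the abstract. The callables and their signatures (optimize, run_on_robot, run_in_sim, ground) are hypothetical placeholders standing in for the paper's actual optimization, evaluation, and simulator-grounding procedures; this is not the authors' implementation.

from typing import Any, Callable

def grounded_simulation_learning(
    params: Any,
    simulator: Any,
    optimize: Callable[[Any, Any], Any],     # hypothetical: optimize behavior params in the simulator
    run_on_robot: Callable[[Any], Any],      # hypothetical: execute params on the real robot, return observations
    run_in_sim: Callable[[Any, Any], Any],   # hypothetical: execute params in the simulator, return predictions
    ground: Callable[[Any, Any, Any], Any],  # hypothetical: learn a simulator correction from the sim/real mismatch
    iterations: int = 4,
) -> Any:
    """Iterate GSL's three steps: optimize in simulation, test on the
    real robot, then re-ground the simulator to shrink the sim-to-real gap."""
    for _ in range(iterations):
        # 1) Optimize the walk behavior inside the (imperfect) simulator.
        params = optimize(simulator, params)
        # 2) Test the optimized behavior on the physical robot and compare
        #    with what the simulator predicted for the same parameters.
        real_obs = run_on_robot(params)
        sim_obs = run_in_sim(simulator, params)
        # 3) Modify the simulator, via machine learning, so its predictions
        #    move closer to the real-robot observations.
        simulator = ground(simulator, sim_obs, real_obs)
    return params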
View:
PDF, PS, HTML
Citation:
In Proc. of 12th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS), May 2013.
Samuel Barrett Ph.D. Alumni sbarrett [at] cs utexas edu
Patrick MacAlpine Ph.D. Student patmac [at] cs utexas edu
Peter Stone Faculty pstone [at] cs utexas edu