Source Task Creation for Curriculum Learning (2016)
Transfer learning in reinforcement learning has been an active area of research over the past decade. In transfer learning, training on a source task is leveraged to speed up or otherwise improve learning on a target task. This paper presents the more ambitious problem of curriculum learning in reinforcement learning, in which the goal is to design a sequence of source tasks for an agent to train on, such that final performance or learning speed on a target task is improved. We take the position that each stage of such a curriculum should be tailored to the current ability of the agent in order to promote learning new behaviors. Thus, as a first step towards creating a curriculum, the trainer must be able to create novel, agent-specific source tasks. We explore how such a space of useful tasks can be created using a parameterized model of the domain and observed trajectories on the target task. We experimentally show that these methods can be used to form components of a curriculum, and that such a curriculum can be used successfully for transfer learning in two challenging multiagent reinforcement learning domains.
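As a toy illustration of the underlying idea (not the paper's actual method or domains), curriculum-style transfer can be sketched with tabular Q-learning on a parameterized chain MDP, where chain length is the task parameter: the agent first trains on short, easy chains, and the learned Q-table seeds learning on progressively longer ones. All names and parameter values below are hypothetical.

```python
import random

def greedy_action(q, s, rng):
    # Break ties randomly so unexplored states behave like a fair random walk.
    q0, q1 = q.get((s, 0), 0.0), q.get((s, 1), 0.0)
    if q0 == q1:
        return rng.randrange(2)
    return 0 if q0 > q1 else 1

def q_learning(n, episodes, q=None, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Tabular Q-learning on a chain MDP with states 0..n-1.

    Actions: 0 = left, 1 = right. The agent starts at state 0 and
    receives reward 1.0 on reaching the goal state n-1. A Q-table
    learned on a shorter (source) chain can be passed in via `q`
    to transfer its values to a longer (target) chain.
    """
    q = dict(q) if q else {}
    rng = random.Random(seed)
    for _ in range(episodes):
        s = 0
        for _ in range(4 * n):  # per-episode step cap
            a = rng.randrange(2) if rng.random() < eps else greedy_action(q, s, rng)
            s2 = min(n - 1, s + 1) if a == 1 else max(0, s - 1)
            r = 1.0 if s2 == n - 1 else 0.0
            best_next = max(q.get((s2, b), 0.0) for b in (0, 1))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
            if r > 0:
                break  # episode ends at the goal
    return q

# Curriculum: train on progressively longer chains, reusing the Q-table
# learned at each stage as the starting point for the next (target n = 8).
q = None
for n in (3, 5, 8):
    q = q_learning(n, episodes=300, q=q)

# Roll out the greedy policy on the target chain.
s, steps = 0, 0
while s != 7 and steps < 20:
    s = s + 1 if q.get((s, 1), 0.0) >= q.get((s, 0), 0.0) else s - 1
    steps += 1
print("greedy rollout reached state", s, "in", steps, "steps")
```

The point of the sketch is only the curriculum structure: each stage is easy relative to the agent's current competence, so the transferred Q-values let later, harder tasks be learned from an informed starting point rather than from scratch.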
In Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016), Singapore, May 2016.

Matteo Leonetti, Postdoctoral Alumni, matteo [at] cs.utexas.edu
Sanmit Narvekar, Ph.D. Student, sanmit [at] cs.utexas.edu
Jivko Sinapov, Postdoctoral Alumni, jsinapov [at] cs.utexas.edu
Peter Stone, Faculty, pstone [at] cs.utexas.edu