Reinforcement learning is a paradigm under which an agent seeks to improve its policy by making learning updates based on the experiences it gathers through interaction with the environment. Model-free algorithms perform updates solely based on observed experiences. By contrast, model-based algorithms learn a model of the environment that effectively simulates its dynamics. The model may be used to simulate experiences or to plan into the future, potentially expediting the learning process. This paper presents a model-based reinforcement learning approach for Keepaway, a complex, continuous, stochastic, multiagent subtask of RoboCup simulated soccer. First, we propose the design of an environmental model that is partly learned from the agent's experiences. This model is then coupled with a reinforcement learning algorithm to learn an action selection policy. We evaluate our method through empirical comparisons with model-free approaches that have previously been applied successfully to this task; the results demonstrate significant gains in both the learning speed and the asymptotic performance of our method. We also show that the learned model can be used effectively as part of a planning-based approach with a hand-coded policy.
In RoboCup 2007: Robot Soccer World Cup XI, Ubbo Visser, Fernando Ribeiro, Takeshi Ohashi, and Frank Dellaert (Eds.), Vol. 5001, pp. 171-83, Berlin, 2008. Springer-Verlag.

Shivaram Kalyanakrishnan Ph.D. Alumni shivaram [at] cs utexas edu
Yaxin Liu Postdoctoral Alumni
Peter Stone Faculty pstone [at] cs utexas edu