Behavior Transfer for Value-Function-Based Reinforcement Learning (2005)
Temporal difference (TD) learning methods have become popular reinforcement learning techniques in recent years. TD methods have had some experimental successes and have been shown to exhibit some desirable properties in theory, but have often been found very slow in practice. A key feature of TD methods is that they represent policies in terms of value functions. In this paper we introduce behavior transfer, a novel approach to speeding up TD learning by transferring the learned value function from one task to a second related task. We present experimental results showing that autonomous learners are able to learn one multiagent task and then use behavior transfer to markedly reduce the total training time for a more complex task.
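To make the idea concrete, here is a minimal sketch of value-function transfer between two tabular Q-learning tasks. This is an illustration only, not the paper's method or domain (the paper uses a multiagent task; here two chain-world tasks of different sizes stand in for the "related tasks", and the identity mapping over shared states is an assumed, hypothetical inter-task mapping).

```python
import random

def td_q_learning(n_states, q=None, episodes=200, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a 1-D chain: actions 0 (left) / 1 (right);
    reward +1 only on reaching the rightmost state."""
    if q is None:
        q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # TD(0) update toward the one-step bootstrapped target
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

def transfer(q_source, n_target):
    """Behavior transfer (sketch): initialize the target task's value
    function from the source task's learned values, here via an
    identity mapping on the states the two tasks share."""
    q = [[0.0, 0.0] for _ in range(n_target)]
    for s in range(min(len(q_source), n_target)):
        q[s] = list(q_source[s])
    return q

random.seed(0)
q_small = td_q_learning(5)                 # learn the simpler source task
q_init = transfer(q_small, 8)              # seed the larger target task
q_big = td_q_learning(8, q=q_init)         # continue learning from the transferred values
```

Because the target task starts from informed value estimates rather than zeros, early episodes already prefer useful actions, which is the mechanism by which transfer can reduce total training time.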
View:
PDF, PS, HTML
Citation:
In The Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, Frank Dignum, Virginia Dignum, Sven Koenig, Sarit Kraus, Munindar P. Singh, and Michael Wooldridge (Eds.), pp. 53-59, New York, NY, July 2005. ACM Press.
Peter Stone, Faculty, pstone [at] cs utexas edu
Matthew Taylor, Ph.D. Alumni, taylorm [at] eecs wsu edu