Transfer Learning via Inter-Task Mappings for Temporal Difference Learning (2007)
Temporal difference (TD) learning has become a popular reinforcement learning technique in recent years. TD methods, relying on function approximators to generalize learning to novel situations, have had some experimental successes and have been shown to exhibit some desirable properties in theory, but the most basic algorithms have often been found slow in practice. This empirical result has motivated the development of many methods that speed up reinforcement learning by modifying a task for the learner or helping the learner better generalize to novel situations. This article focuses on generalizing across tasks, thereby speeding up learning, via a novel form of transfer using handcoded task relationships. We compare learning on a complex task with three function approximators, a cerebellar model arithmetic computer (CMAC), an artificial neural network (ANN), and a radial basis function (RBF), and empirically demonstrate that directly transferring the action-value function can lead to a dramatic speedup in learning with all three. Using transfer via inter-task mapping (TVITM), agents are able to learn one task and then markedly reduce the time it takes to learn a more complex task. Our algorithms are fully implemented and tested in the RoboCup soccer Keepaway domain.
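To make the transfer idea concrete, below is a minimal sketch of TVITM-style initialization. It assumes hypothetical feature and action counts, placeholder hand-coded mappings (FEATURE_MAP, ACTION_MAP), and a simple linear action-value function standing in for the paper's CMAC, ANN, and RBF approximators; none of these specifics come from the paper itself. The idea illustrated is the one stated in the abstract: weights learned on a source task seed the target task's action-value function via the inter-task mappings, and ordinary TD (Sarsa-style) learning then continues in the target task.

```python
import numpy as np

# Hypothetical sizes for illustration (not the paper's exact Keepaway encodings):
N_SRC_FEATURES, N_SRC_ACTIONS = 13, 3   # e.g. a smaller source task
N_TGT_FEATURES, N_TGT_ACTIONS = 19, 4   # e.g. a larger, more complex target task

# Hand-coded inter-task mappings: each target feature/action points to the
# "most similar" source feature/action. The values here are placeholders.
FEATURE_MAP = [i % N_SRC_FEATURES for i in range(N_TGT_FEATURES)]
ACTION_MAP = [0, 1, 2, 2]  # the extra target action reuses a similar source action


def transfer_weights(src_weights):
    """Initialize target-task Q weights from learned source-task weights.

    src_weights has shape (N_SRC_ACTIONS, N_SRC_FEATURES) and defines a linear
    action-value function Q_src(s, a) = src_weights[a] @ s (a stand-in for the
    paper's CMAC/ANN/RBF approximators).
    """
    tgt_weights = np.zeros((N_TGT_ACTIONS, N_TGT_FEATURES))
    for a_tgt in range(N_TGT_ACTIONS):
        for f_tgt in range(N_TGT_FEATURES):
            # Copy the weight of the mapped (source action, source feature) pair.
            tgt_weights[a_tgt, f_tgt] = src_weights[ACTION_MAP[a_tgt], FEATURE_MAP[f_tgt]]
    return tgt_weights


def sarsa_update(weights, s, a, r, s_next, a_next, alpha=0.1, gamma=1.0):
    """One Sarsa TD update on the (transferred) linear Q; learning in the
    target task continues from the transferred starting point."""
    td_error = r + gamma * (weights[a_next] @ s_next) - (weights[a] @ s)
    weights[a] += alpha * td_error * s
    return weights


# Example usage (random stand-ins for learned source weights and target states):
src_weights = np.random.randn(N_SRC_ACTIONS, N_SRC_FEATURES)  # "learned" on source
tgt_weights = transfer_weights(src_weights)
s, s_next = np.random.randn(N_TGT_FEATURES), np.random.randn(N_TGT_FEATURES)
tgt_weights = sarsa_update(tgt_weights, s, a=1, r=1.0, s_next=s_next, a_next=2)
```

The point of copying mapped weights rather than starting from zero is that target-task learning begins from a value function already shaped by source-task experience, which is the mechanism the abstract credits for the observed speedup.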
Citation:
Journal of Machine Learning Research, Vol. 8, 1 (2007), pp. 2125-2167.
Yaxin Liu, Postdoctoral Alumni
Peter Stone, Faculty, pstone [at] cs utexas edu
Matthew Taylor, Ph.D. Alumni, taylorm [at] eecs wsu edu