Transfer via Inter-Task Mappings in Policy Search Reinforcement Learning (2007)
Matthew E. Taylor, Shimon Whiteson, and Peter Stone
The ambitious goal of transfer learning is to accelerate learning on a target task after training on a different, but related, source task. While many past transfer methods have focused on transferring value functions, this paper presents a method for transferring policies across tasks with different state and action spaces. In particular, it introduces transfer via inter-task mappings for policy search methods (TVITM-PS), which constructs a transfer functional that translates a population of neural network policies trained via policy search from a source task to a target task. Empirical results in robot soccer Keepaway and Server Job Scheduling show that TVITM-PS can markedly reduce learning time when full inter-task mappings are available. The results also demonstrate that TVITM-PS still succeeds when given only incomplete inter-task mappings. Furthermore, we present a novel method for learning such mappings when they are not available, and give results showing that learned mappings perform comparably to hand-coded ones.
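The abstract does not spell out the mechanics of the transfer functional, but a minimal sketch conveys the idea: given inter-task mappings chi_X (each target state variable to its most similar source state variable) and chi_A (each target action to its most similar source action), the weights of a target-task policy network are initialized by copying the corresponding weights of the source-task network. The sketch below assumes a single-hidden-layer fully connected policy network; the function and variable names are illustrative, not taken from the paper.

import numpy as np

def transfer_policy(W_in_src, W_out_src, chi_X, chi_A):
    """Initialize a target-task policy network from a source-task one.

    W_in_src  : source input-to-hidden weights, shape (n_src_state_vars, n_hidden)
    W_out_src : source hidden-to-output weights, shape (n_hidden, n_src_actions)
    chi_X     : chi_X[j] = source state variable that target state variable j maps to
    chi_A     : chi_A[a] = source action that target action a maps to
    """
    # Each target state variable inherits the incoming weights of the
    # source state variable it maps to.
    W_in_tgt = np.stack([W_in_src[chi_X[j]] for j in range(len(chi_X))])
    # Each target action inherits the outgoing weights of the source
    # action it maps to.
    W_out_tgt = np.stack([W_out_src[:, chi_A[a]] for a in range(len(chi_A))], axis=1)
    return W_in_tgt, W_out_tgt

# Example: 3 vs. 2 Keepaway has 13 state variables and 3 actions, while
# 4 vs. 3 has 19 and 4; novel target variables and actions reuse the weights
# of their most similar source counterparts. A toy 4-state, 2-action source
# network is used here to keep the example small.
rng = np.random.default_rng(0)
W_in_src = rng.normal(size=(4, 8))
W_out_src = rng.normal(size=(8, 2))
chi_X = [0, 1, 2, 3, 1, 2]   # two extra target variables map back to source vars
chi_A = [0, 1, 1]            # one extra target action maps to source action 1
W_in_tgt, W_out_tgt = transfer_policy(W_in_src, W_out_src, chi_X, chi_A)

Since the method transfers a whole population of policies, such copied networks would seed the target task's initial population for further policy search rather than serve as final policies.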
Citation:
In Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems, May 2007.
Bibtex:
@InProceedings{AAMAS07-taylor,
  author    = {Matthew E. Taylor and Shimon Whiteson and Peter Stone},
  title     = {Transfer via Inter-Task Mappings in Policy Search Reinforcement Learning},
  booktitle = {Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems},
  month     = {May},
  year      = {2007},
}
Peter Stone Faculty pstone [at] cs utexas edu
Matthew Taylor Ph.D. Alumni taylorm [at] eecs wsu edu