Improving Action Selection in MDP's via Knowledge Transfer (2005)
Alexander A. Sherstov and Peter Stone
Temporal-difference reinforcement learning (RL) has been successfully applied in several domains with large state sets. Large action sets, however, have received considerably less attention. This paper demonstrates the use of knowledge transfer between related tasks to accelerate learning with large action sets. We introduce action transfer, a technique that extracts the actions from the (near-)optimal solution to the first task and uses them in place of the full action set when learning any subsequent tasks. When optimal actions make up a small fraction of the domain's action set, action transfer can substantially reduce the number of actions and thus the complexity of the problem. However, action transfer between dissimilar tasks can be detrimental. To address this difficulty, we contribute randomized task perturbation (RTP), an enhancement to action transfer that makes it robust to unrepresentative source tasks. We motivate RTP action transfer with a detailed theoretical analysis featuring a formalism of related tasks and a bound on the suboptimality of action transfer. The empirical results in this paper show the potential of RTP action transfer to substantially expand the applicability of RL to problems with large action sets.
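The core idea of action transfer described above lends itself to a short sketch. The Python fragment below is a minimal illustration under stated assumptions, not the paper's implementation: it assumes tabular Q-values for the source task are already available, and the names q_source, epsilon, and TabularQLearner are hypothetical placeholders.

    # Minimal sketch of action transfer (assumed setup, not the paper's code).
    # q_source: dict mapping state -> {action: Q-value}, learned on the source task.

    def transferred_actions(q_source, epsilon=0.0):
        """Collect every action that is (near-)optimal in some source-task state.

        epsilon allows near-optimal actions to be retained as well; with
        epsilon=0 only strictly optimal actions survive.
        """
        kept = set()
        for state, action_values in q_source.items():
            best = max(action_values.values())
            for action, value in action_values.items():
                if value >= best - epsilon:
                    kept.add(action)
        return kept

    # When learning a subsequent task, the learner would explore only the
    # reduced action set rather than the domain's full action set, e.g.:
    # learner = TabularQLearner(actions=transferred_actions(q_source))

When the optimal actions form a small subset of the full action set, the reduced set returned here is what shrinks the problem; RTP extends this scheme by drawing on randomly perturbed variants of the source task so that an unrepresentative source task does not discard actions needed later.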
Citation:
In Proceedings of the Twentieth National Conference on Artificial Intelligence, July 2005.
Peter Stone, Faculty, pstone [at] cs utexas edu