Transferring Instances for Model-Based Reinforcement Learning.
Matthew E. Taylor,
Nicholas K. Jong, and Peter
Stone.
In Machine Learning and Knowledge Discovery in Databases, pp. 488–505, September 2008.
Official version from Publisher's Webpage. © Springer-Verlag.
Reinforcement learning agents typically require a significant amount of data before performing well on complex tasks. Transfer learning methods have made progress reducing sample complexity, but they have primarily been applied to model-free learning methods, not more data-efficient model-based learning methods. This paper introduces TIMBREL, a novel method capable of transferring information effectively into a model-based reinforcement learning algorithm. We demonstrate that TIMBREL can significantly improve the sample efficiency and asymptotic performance of a model-based algorithm when learning in a continuous state space. Additionally, we conduct experiments to test the limits of TIMBREL's effectiveness.
@inproceedings(ECML08-taylor,
author="Matthew E.\ Taylor and Nicholas K.\ Jong and Peter Stone",
title="Transferring Instances for Model-Based Reinforcement Learning",
booktitle="Machine Learning and Knowledge Discovery in Databases",
month="September",
year="2008",
series="Lecture Notes in Artificial Intelligence",
volume="5212",
pages="488--505",
wwwnote={<a href="http://www.ecmlpkdd2008.org/">ECML-2008</a>},
abstract={Reinforcement learning agents typically require a significant
amount of data before performing well on complex tasks. Transfer
learning methods have made progress reducing sample complexity,
but they have primarily been applied to model-free learning
methods, not more data-efficient model-based learning
methods. This paper introduces TIMBREL, a novel method capable of
transferring information effectively into a model-based
reinforcement learning algorithm. We demonstrate that TIMBREL can
significantly improve the sample efficiency and asymptotic
performance of a model-based algorithm when learning in a
continuous state space. Additionally, we conduct experiments to
test the limits of TIMBREL's effectiveness.},
wwwnote={Official version from <a href="http://dx.doi.org/10.1007/978-3-540-87481-2_32">Publisher's Webpage</a>&copy; Springer-Verlag},
)
Generated by bib2html.pl (written by Patrick Riley) on Thu Oct 23, 2025 16:14:19