Towards a Data Efficient Off-Policy Policy Gradient (2018)
The ability to learn from off-policy data (data generated from past interaction with the environment) is essential to data-efficient reinforcement learning. Recent work has shown that using off-policy data not only allows data to be reused but can even improve performance relative to on-policy reinforcement learning. In this work we investigate whether a recently proposed method for learning a better data-generation policy, commonly called a behavior policy, can also increase the data efficiency of policy gradient reinforcement learning. Empirical results demonstrate that with an appropriately selected behavior policy we can estimate the policy gradient more accurately. The results also motivate further work on methods that adapt the behavior policy as the policy being learned changes.
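The estimator underlying this line of work is the importance-sampled Monte Carlo policy gradient: trajectories are generated by a behavior policy, and each trajectory's contribution to the gradient of the target policy is reweighted by the likelihood ratio between the two policies. The sketch below is a minimal illustration of that estimator, not the paper's implementation; the tabular softmax parameterization, the toy transition and reward rules, and all function and variable names are assumptions made for the sake of a short runnable example.

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions = 4, 2

    def softmax(logits):
        z = logits - logits.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def action_probs(theta, s):
        # Tabular softmax policy: one row of logits per state (an assumption).
        return softmax(theta[s])

    def grad_log_pi(theta, s, a):
        # Gradient of log pi_theta(a|s): indicator minus action probabilities.
        g = np.zeros_like(theta)
        g[s] = -action_probs(theta, s)
        g[s, a] += 1.0
        return g

    def off_policy_pg(theta, beta, trajectories):
        # Importance-sampled estimate of the policy gradient of pi_theta
        # from trajectories generated by the behavior policy pi_beta.
        grad = np.zeros_like(theta)
        for states, actions, rewards in trajectories:
            ret = sum(rewards)
            w = 1.0                       # prod_t pi_theta(a|s) / pi_beta(a|s)
            score = np.zeros_like(theta)  # sum_t grad log pi_theta(a|s)
            for s, a in zip(states, actions):
                w *= action_probs(theta, s)[a] / action_probs(beta, s)[a]
                score += grad_log_pi(theta, s, a)
            grad += w * ret * score
        return grad / len(trajectories)

    def sample_trajectory(beta, horizon=5):
        # Toy environment dynamics and rewards, purely for illustration.
        states, actions, rewards = [], [], []
        s = 0
        for _ in range(horizon):
            a = rng.choice(n_actions, p=action_probs(beta, s))
            states.append(s)
            actions.append(a)
            rewards.append(1.0 if a == 1 else 0.0)
            s = (s + a + 1) % n_states
        return states, actions, rewards

    theta = np.zeros((n_states, n_actions))              # target policy
    beta = 0.5 * rng.normal(size=(n_states, n_actions))  # behavior policy
    trajs = [sample_trajectory(beta) for _ in range(100)]
    print(off_policy_pg(theta, beta, trajs))

The estimate is unbiased for any behavior policy that covers the target policy's actions, but its variance depends heavily on which behavior policy generates the data; choosing (or adapting) that policy to reduce the variance of this estimator is the question the paper studies.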
Citation:
In AAAI Spring Symposium on Data Efficient Reinforcement Learning, Palo Alto, CA, March 2018.
Bibtex:
@inproceedings{Hanna2018Towards,
  author = {Josiah Hanna and Peter Stone},
  title = {Towards a Data Efficient Off-Policy Policy Gradient},
  booktitle = {AAAI Spring Symposium on Data Efficient Reinforcement Learning},
  address = {Palo Alto, CA},
  month = {March},
  year = {2018}
}
Josiah Hanna, Ph.D. Student, jphanna [at] cs utexas edu
Peter Stone, Faculty, pstone [at] cs utexas edu