On Sampling Error in Batch Action-Value Prediction Algorithms (2020)
Estimating a policy's action-values is a fundamental aspect of reinforcement learning. In this work, we study the application of TD methods for learning action-values in an offline setting with a fixed batch of data. Motivated by recent work, we observe that a fixed batch of offline data may contain two forms of distribution shift: the data may be collected from a behavior policy that differs from the target policy (off-policy data), and the empirical distribution of the data may differ from its sampling distribution (sampling error). We focus on the second problem by analyzing the sampling error that arises from the variance of sampling a finite-sized batch of data in the RL setting. We study how action-value learning algorithms suffer from this sampling error by considering their so-called certainty-equivalence estimates. We prove that each algorithm converges to the fixed point determined by its certainty-equivalence estimates of the policy and transition dynamics. We then empirically evaluate each algorithm's performance by measuring the mean-squared value error on a Gridworld domain. Ultimately, we find that by reducing sampling error, an algorithm can produce significantly more accurate action-value estimates.
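As a concrete illustration of the quantities discussed above, the following minimal Python sketch (not the paper's code; all names such as certainty_equivalence_q, mean_squared_value_error, batch, and policy are hypothetical) builds a certainty-equivalence estimate of the transition dynamics and rewards from a fixed batch of tabular transitions, solves for the corresponding action-value fixed point under a known target policy, and measures the mean-squared value error against true action-values.

import numpy as np

def certainty_equivalence_q(batch, n_states, n_actions, policy, gamma=0.99):
    """Estimate Q by solving the Bellman equations of the empirical MDP
    built from the batch: counts of observed transitions and mean rewards."""
    counts = np.zeros((n_states, n_actions, n_states))
    reward_sums = np.zeros((n_states, n_actions))
    for s, a, r, s_next in batch:                    # batch: list of (s, a, r, s') tuples
        counts[s, a, s_next] += 1
        reward_sums[s, a] += r

    sa_counts = counts.sum(axis=2)                   # visits to each (s, a)
    visited = sa_counts > 0                          # unvisited pairs are left at zero
    P_hat = np.zeros_like(counts)                    # empirical transition model
    P_hat[visited] = counts[visited] / sa_counts[visited][:, None]
    R_hat = np.zeros_like(reward_sums)               # empirical mean rewards
    R_hat[visited] = reward_sums[visited] / sa_counts[visited]

    # Bellman fixed point under the target policy pi(a|s):
    # Q = R_hat + gamma * P_pi Q, where
    # P_pi[(s,a),(s',a')] = P_hat(s'|s,a) * pi(a'|s'). Solve the linear system.
    n_sa = n_states * n_actions
    P_pi = (P_hat[:, :, :, None] * policy[None, None, :, :]).reshape(n_sa, n_sa)
    q_flat = np.linalg.solve(np.eye(n_sa) - gamma * P_pi, R_hat.reshape(n_sa))
    return q_flat.reshape(n_states, n_actions)

def mean_squared_value_error(q_estimate, q_true, weights=None):
    """MSVE between estimated and true action-values, optionally weighted
    by a state-action distribution (uniform if none is given)."""
    sq_err = (q_estimate - q_true) ** 2
    if weights is None:
        return sq_err.mean()
    return (weights * sq_err).sum()

Solving the linear system directly yields the fixed point of the empirical Bellman equations under the given policy; how a particular batch action-value algorithm relates to such a certainty-equivalence fixed point is the kind of question the paper analyzes.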
View:
PDF
Citation:
In the Offline Reinforcement Learning Workshop at Neural Information Processing Systems (NeurIPS 2020), Remote (Virtual Conference), December 2020.

Ishan Durugkar Ph.D. Student ishand [at] cs utexas edu
Josiah Hanna Ph.D. Student jphanna [at] cs utexas edu
Peter Stone Faculty pstone [at] cs utexas edu