Importance Sampling Policy Evaluation with an Estimated Behavior Policy (2019)
Josiah Hanna, Scott Niekum, and Peter Stone
We consider the problem of off-policy evaluation in Markov decision processes. Off-policy evaluation is the task of evaluating the expected return of one policy with data generated by a different policy, the behavior policy. Importance sampling is a technique for off-policy evaluation that re-weights off-policy returns to account for differences in the likelihood of the returns under the two policies. In this paper, we study importance sampling with an estimated behavior policy, where the behavior policy estimate comes from the same set of data used to compute the importance sampling estimate. We find that this estimator often lowers the mean squared error of off-policy evaluation compared to importance sampling with the true behavior policy or with a behavior policy estimated from a separate data set. Intuitively, estimating the behavior policy in this way corrects for sampling error in the action space. Our empirical results also extend to other popular variants of importance sampling and show that estimating a non-Markovian behavior policy can further lower large-sample mean squared error even when the true behavior policy is Markovian.
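To make the estimator concrete, here is a minimal sketch in Python, assuming discrete states and actions and trajectories represented as lists of (state, action, reward) tuples. The function names and the pi_e callable are illustrative assumptions, not taken from the paper: the behavior policy is fit by maximum likelihood on the same trajectories being evaluated, and its probabilities replace the true behavior probabilities in the denominators of the importance weights.

```python
import numpy as np
from collections import defaultdict

def estimate_behavior_policy(trajectories):
    # Count-based maximum-likelihood estimate of a Markovian behavior
    # policy, fit on the same trajectories used for evaluation.
    counts = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:
        for s, a, _ in traj:
            counts[s][a] += 1
    return {s: {a: c / sum(acts.values()) for a, c in acts.items()}
            for s, acts in counts.items()}

def is_estimate(trajectories, pi_e, gamma=1.0):
    # Ordinary importance sampling, except the weight denominators come
    # from the estimated behavior policy rather than the true one.
    # pi_e(s, a) is assumed to return the evaluation policy's probability.
    pi_b_hat = estimate_behavior_policy(trajectories)
    values = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= pi_e(s, a) / pi_b_hat[s][a]
            ret += gamma ** t * r
        values.append(weight * ret)
    return float(np.mean(values))

# Hypothetical usage: two short trajectories, uniform evaluation policy.
trajs = [[(0, 0, 1.0), (1, 1, 0.0)], [(0, 1, 0.0), (1, 1, 1.0)]]
print(is_estimate(trajs, pi_e=lambda s, a: 0.5))
```

Because the estimated policy matches the empirical action frequencies exactly, actions that happened to be over- or under-sampled relative to their true probabilities receive correspondingly adjusted weights, which is the action-space sampling-error correction the abstract describes.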
View:
PDF
Citation:
In Proceedings of the 36th International Conference on Machine Learning (ICML), Long Beach, California, U.S.A., June 2019.
Presentation:
Slides (PDF)
Josiah Hanna, Ph.D. Student, jphanna [at] cs.utexas.edu
Peter Stone, Faculty, pstone [at] cs.utexas.edu