Peter Stone's Selected Publications



Importance Sampling Policy Evaluation with an Estimated Behavior Policy

Importance Sampling Policy Evaluation with an Estimated Behavior Policy.
Josiah Hanna, Scott Niekum, and Peter Stone.
In Proceedings of the 36th International Conference on Machine Learning (ICML), June 2019.

Download

[PDF] (2.7MB)   [slides.pdf] (4.0MB)

Abstract

We consider the problem of off-policy evaluation in Markov decision processes. Off-policy evaluation is the task of evaluating the expected return of one policy with data generated by a different behavior policy. Importance sampling is a technique for off-policy evaluation that re-weights off-policy returns to account for differences in the likelihood of the returns between the two policies. In this paper, we study importance sampling with an estimated behavior policy where the behavior policy estimate comes from the same set of data used to compute the importance sampling estimate. We find that this estimator often lowers the mean squared error of off-policy evaluation compared to importance sampling with the true behavior policy or using a behavior policy that is estimated from a separate data set. Intuitively, estimating the behavior policy in this way corrects for error due to sampling in the action space. Our empirical results also extend to other popular variants of importance sampling and show that estimating a non-Markovian behavior policy can further lower large-sample mean squared error even when the true behavior policy is Markovian.
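
For illustration, here is a minimal sketch of the idea described in the abstract, assuming a tabular MDP with discrete states and actions. The function and variable names (estimate_behavior_policy, importance_sampling_estimate, pi_e, pi_b_hat) are hypothetical and not the authors' implementation: the behavior policy is estimated by simple counting from the same batch of logged data, and that estimate is then used inside the ordinary importance sampling weights.

import numpy as np
from collections import defaultdict

def estimate_behavior_policy(trajectories, n_actions):
    # Count-based maximum-likelihood estimate of a Markovian behavior policy
    # pi_b(a|s), computed from the same logged data used for evaluation.
    counts = defaultdict(lambda: np.zeros(n_actions))
    for traj in trajectories:
        for state, action, _reward in traj:
            counts[state][action] += 1.0
    return {state: c / c.sum() for state, c in counts.items()}

def importance_sampling_estimate(trajectories, pi_e, pi_b_hat, gamma=1.0):
    # Ordinary importance sampling: re-weight each trajectory's return by the
    # product of per-step ratios pi_e(a|s) / pi_b_hat(a|s), then average.
    per_trajectory = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (state, action, reward) in enumerate(traj):
            weight *= pi_e[state][action] / pi_b_hat[state][action]
            ret += (gamma ** t) * reward
        per_trajectory.append(weight * ret)
    return float(np.mean(per_trajectory))

Here, trajectories is a list of episodes, each a list of (state, action, reward) tuples logged under the behavior policy, and pi_e maps each state to an array of action probabilities under the evaluation policy. The paper's observation is that plugging in pi_b_hat estimated from this same batch often yields lower mean squared error than using the true behavior policy's action probabilities.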

BibTeX Entry

@InProceedings{ICML2019-Hanna,
  author={Josiah Hanna and Scott Niekum and Peter Stone},
  title={Importance Sampling Policy Evaluation with an Estimated Behavior Policy},
  booktitle={Proceedings of the 36th International Conference on Machine Learning (ICML)},
  location={Long Beach, California, U.S.A.},
  month={June},
  year={2019},
  abstract={We consider the problem of off-policy evaluation in Markov
  decision processes. Off-policy evaluation is the task of evaluating the
  expected return of one policy with data generated by a different behavior
  policy. Importance sampling is a technique for off-policy evaluation that
  re-weights off-policy returns to account for differences in the likelihood
  of the returns between the two policies. In this paper, we study importance
  sampling with an estimated behavior policy where the behavior policy
  estimate comes from the same set of data used to compute the importance
  sampling estimate. We find that this estimator often lowers the mean squared
  error of off-policy evaluation compared to importance sampling with the true
  behavior policy or using a behavior policy that is estimated from a separate
  data set. Intuitively, estimating the behavior policy in this way corrects
  for error due to sampling in the action space. Our empirical results also
  extend to other popular variants of importance sampling and show that
  estimating a non-Markovian behavior policy can further lower large-sample
  mean squared error even when the true behavior policy is Markovian.},
}
