Peter Stone's Selected Publications



Reducing Sampling Error in Batch Temporal Difference Learning

Reducing Sampling Error in Batch Temporal Difference Learning.
Brahma Pavse, Ishan Durugkar, Josiah Hanna, and Peter Stone.
In Proceedings of the 37th International Conference on Machine Learning (ICML), July 2020.
The paper and talk are available from the ICML 2020 virtual conference page.

Download

[PDF] 738.4kB  [slides.pdf] 5.2MB

Abstract

Temporal difference (TD) learning is one of the main foundations of modern reinforcement learning. This paper studies the use of TD(0), a canonical TD algorithm, to estimate the value function of a given policy from a batch of data. In this batch setting, we show that TD(0) may converge to an inaccurate value function because the update following an action is weighted according to the number of times that action occurred in the batch -- not the true probability of the action under the given policy. To address this limitation, we introduce policy sampling error corrected-TD(0) (PSEC-TD(0)). PSEC-TD(0) first estimates the empirical distribution of actions in each state in the batch and then uses importance sampling to correct for the mismatch between the empirical weighting and the correct weighting for updates following each action. We refine the concept of a certainty-equivalence estimate and argue that PSEC-TD(0) is a more data efficient estimator than TD(0) for a fixed batch of data. Finally, we conduct an empirical evaluation of PSEC-TD(0) on three batch value function learning tasks, with a hyperparameter sensitivity analysis, and show that PSEC-TD(0) produces value function estimates with lower mean squared error than TD(0).
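To make the correction described in the abstract concrete, below is a minimal tabular sketch of PSEC-TD(0), assuming a finite state and action space, a known evaluation policy pi, and a batch of (s, a, r, s', done) transitions. The function name psec_td0, the hyperparameters, and the sweep-based update schedule are illustrative assumptions, not taken from the paper or its released code.

    import numpy as np
    from collections import defaultdict

    def psec_td0(batch, pi, n_states, alpha=0.1, gamma=0.99, n_sweeps=50):
        """Tabular PSEC-TD(0) sketch (illustrative, not the authors' implementation).

        batch    : list of (s, a, r, s_next, done) transitions
        pi       : pi[s][a] = probability of action a in state s under the
                   evaluation policy (assumed known, as in the batch policy
                   evaluation setting described in the abstract)
        n_states : number of states (tabular representation assumed)
        """
        # Step 1: estimate the empirical (maximum-likelihood) policy from the batch,
        # i.e. how often each action actually occurred in each state.
        sa_counts = defaultdict(lambda: defaultdict(int))
        s_counts = defaultdict(int)
        for s, a, r, s_next, done in batch:
            sa_counts[s][a] += 1
            s_counts[s] += 1

        def psec_weight(s, a):
            # Importance-sampling ratio: true policy probability divided by the
            # empirical frequency of action a in state s within the batch.
            pi_hat = sa_counts[s][a] / s_counts[s]
            return pi[s][a] / pi_hat

        # Step 2: run batch TD(0), reweighting each update by the PSEC ratio so
        # that updates reflect the true action probabilities rather than the
        # sampled action counts.
        V = np.zeros(n_states)
        for _ in range(n_sweeps):
            for s, a, r, s_next, done in batch:
                target = r + (0.0 if done else gamma * V[s_next])
                rho = psec_weight(s, a)
                V[s] += alpha * rho * (target - V[s])
        return V

Setting every PSEC ratio to 1 in this sketch recovers ordinary batch TD(0), which is the baseline the paper compares against.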

BibTeX Entry

@InProceedings{ICML2020-Pavse,
  author={Brahma Pavse and Ishan Durugkar and Josiah Hanna and Peter Stone},
  title={Reducing Sampling Error in Batch Temporal Difference Learning},
  booktitle={Proceedings of the 37th International Conference on Machine Learning (ICML)},
  month={July},
  year={2020},
  location={Vienna, Austria (Virtual Conference)},
  abstract={
Temporal difference (TD) learning is one of the main foundations of modern 
reinforcement learning. This paper studies the use of TD(0), a canonical TD 
algorithm, to estimate the value function of a given policy from a batch of 
data. In this batch setting, we show that TD(0) may converge to an inaccurate 
value function because the update following an action is weighted according to 
the number of times that action occurred in the batch -- not the true 
probability of the action under the given policy. To address this limitation, 
we introduce \textit{policy sampling error corrected}-TD(0) (PSEC-TD(0)). 
PSEC-TD(0) first estimates the empirical distribution of actions in each state 
in the batch and then uses importance sampling to correct for the mismatch 
between the empirical weighting and the correct weighting for updates following 
each action. We refine the concept of a certainty-equivalence estimate and 
argue that PSEC-TD(0) is a more data efficient estimator than TD(0) for a fixed 
batch of data. Finally, we conduct an empirical evaluation of PSEC-TD(0) on 
three batch value function learning tasks, with a hyperparameter sensitivity 
analysis, and show that PSEC-TD(0) produces value function estimates with lower 
mean squared error than TD(0).
},
wwwnote={The paper and talk are available from the <a href="https://icml.cc/virtual/2020/poster/6626">ICML 2020 virtual conference page</a>.},
}
