Peter Stone's Selected Publications



TEXPLORE: Real-Time Sample-Efficient Reinforcement Learning for Robots

TEXPLORE: Real-Time Sample-Efficient Reinforcement Learning for Robots.
Todd Hester and Peter Stone.
Machine Learning, 90(3):385–429, 2013.
Official version from journal website.


Abstract

The use of robots in society could be expanded by using reinforcement learning (RL) to allow robots to learn and adapt to new situations online. RL is a paradigm for learning sequential decision making tasks, usually formulated as a Markov Decision Process (MDP). For an RL algorithm to be practical for robotic control tasks, it must learn in very few samples, while continually taking actions in real-time. In addition, the algorithm must learn efficiently in the face of noise, sensor/actuator delays, and continuous state features. In this article, we present TEXPLORE, the first algorithm to address all of these challenges together. TEXPLORE is a model-based RL method that learns a random forest model of the domain which generalizes dynamics to unseen states. The agent explores states that are promising for the final policy, while ignoring states that do not appear promising. With sample-based planning and a novel parallel architecture, TEXPLORE can select actions continually in real-time whenever necessary. We empirically evaluate the importance of each component of TEXPLORE in isolation and then demonstrate the complete algorithm learning to control the velocity of an autonomous vehicle in real-time.
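The learn-model/plan/act loop the abstract describes can be sketched in miniature. This is an illustrative toy under simplifying assumptions, not the paper's algorithm: a tabular model with an optimistic default for unvisited state-action pairs stands in for TEXPLORE's random-forest model, exact value iteration stands in for its sample-based planning, and the parallel real-time architecture is omitted entirely. The `ChainEnv` domain is invented for the example.

```python
class ChainEnv:
    """Toy deterministic chain MDP: states 0..4, actions 0 (left) / 1 (right).
    Being in state 4 after a step yields reward 1; everything else yields 0."""
    N = 5
    def __init__(self):
        self.s = 0
    def step(self, a):
        self.s = max(0, min(self.N - 1, self.s + (1 if a == 1 else -1)))
        return self.s, (1.0 if self.s == self.N - 1 else 0.0)

class TabularModel:
    """Learned model of (s, a) -> (s', r); a stand-in for the random forest."""
    def __init__(self, n_states=5, n_actions=2):
        self.t = {}
        self.n_states, self.n_actions = n_states, n_actions
    def update(self, s, a, s2, r):
        self.t[(s, a)] = (s2, r)
    def predict(self, s, a):
        # Optimistic default for unvisited pairs: assume a rewarding self-loop.
        # This drives the agent toward unexplored, potentially promising states.
        return self.t.get((s, a), (s, 1.0))

def plan(model, s0, gamma=0.9, iters=50):
    """Greedy action via value iteration on the *learned* model
    (the paper instead uses sample-based planning for real-time action selection)."""
    V = [0.0] * model.n_states
    for _ in range(iters):
        V = [max(model.predict(s, a)[1] + gamma * V[model.predict(s, a)[0]]
                 for a in range(model.n_actions))
             for s in range(model.n_states)]
    qs = [model.predict(s0, a)[1] + gamma * V[model.predict(s0, a)[0]]
          for a in range(model.n_actions)]
    return qs.index(max(qs))

env, model = ChainEnv(), TabularModel()
s = env.s
for _ in range(200):        # act, observe, update the model, replan
    a = plan(model, s)
    s2, r = env.step(a)
    model.update(s, a, s2, r)
    s = s2
```

After this loop the agent has discovered the rewarding transition at the right end of the chain and exploits it; the point of the sketch is only the structure of model-based RL with optimistic exploration, which TEXPLORE instantiates with far more capable components.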

BibTeX Entry

@article{MLJ12-hester,
  author = {Todd Hester and Peter Stone},
  title = {{TEXPLORE}: Real-Time Sample-Efficient Reinforcement Learning for Robots},
  journal = {Machine Learning},
  url = {http://dx.doi.org/10.1007/s10994-012-5322-7},
  doi = {10.1007/s10994-012-5322-7},
  year = {2013},
  volume = {90},
  number = {3},
  pages = {385--429},
  wwwnote = {<a href="http://www.springerlink.com/openurl.asp?genre=article&id=doi:10.1007/s10994-012-5322-7">Official version</a> from journal website.},
abstract = {The use of robots in society could be expanded by using reinforcement learning (RL) to allow robots to learn and adapt to new situations online.
RL is a paradigm for learning sequential decision making tasks, usually formulated as a Markov Decision Process (MDP). 
For an RL algorithm to be practical for robotic control tasks, it must learn in
very few samples, while continually taking actions in
real-time. In addition, the algorithm must learn efficiently in the face of noise, sensor/actuator delays, and continuous state features.
In this article, we present TEXPLORE, the first algorithm to address all of these challenges together. TEXPLORE is a model-based RL method that learns a random forest model of the domain which generalizes dynamics to unseen states. The agent explores states that are promising for the final policy, while ignoring states that do not appear promising. 
With sample-based planning and a novel parallel architecture, TEXPLORE can select actions continually in real-time whenever necessary. 
We empirically evaluate the importance of each component of TEXPLORE in isolation and then demonstrate the complete algorithm learning to control the velocity of an autonomous vehicle in real-time.},
}

Generated by bib2html.pl (written by Patrick Riley) on Mon Mar 25, 2024 00:05:09