Peter Stone's Selected Publications



Gaussian processes for sample efficient reinforcement learning with RMAX-like exploration

Tobias Jung and Peter Stone. Gaussian processes for sample efficient reinforcement learning with RMAX-like exploration. In The European Conference on Machine Learning (ECML), September 2010.

Download

[PDF] 417.0kB  [postscript] 6.5MB

Abstract

We present an implementation of model-based online reinforcement learning (RL) for continuous domains with deterministic transitions that is specifically designed to achieve low sample complexity. To achieve low sample complexity, since the environment is unknown, an agent must intelligently balance exploration and exploitation, and must be able to rapidly generalize from observations. While a number of related sample efficient RL algorithms have been proposed in the past, to allow theoretical analysis they mainly considered model-learners with weak generalization capabilities. Here, we separate function approximation in the model-learner (which does require samples) from the interpolation in the planner (which does not require samples). For model-learning we apply Gaussian process (GP) regression, which is able to automatically adjust itself to the complexity of the problem (via Bayesian hyperparameter selection) and, in practice, is often able to learn a highly accurate model from very little data. In addition, a GP provides a natural way to determine the uncertainty of its predictions, which allows us to implement the "optimism in the face of uncertainty" principle used to efficiently control exploration. Our method is evaluated on four common benchmark domains.
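The combination described above — GP regression for the model, with predictive uncertainty driving optimistic exploration — can be illustrated with a minimal sketch. This is not the authors' code; the kernel, hyperparameter values, and the UCB-style optimism bonus are all assumptions chosen for the example.

```python
# Illustrative sketch (not the paper's implementation): zero-mean Gaussian
# process regression with an RBF kernel, whose posterior variance supplies
# an "optimism in the face of uncertainty" exploration bonus.
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """Squared-exponential kernel k(a,b) = s^2 exp(-||a-b||^2 / (2 l^2))."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return signal_var * np.exp(-0.5 * sq / length_scale**2)

def gp_posterior(X, y, X_star, noise_var=1e-6):
    """Posterior mean and variance of a zero-mean GP at query points X_star."""
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    K_s = rbf_kernel(X, X_star)
    K_ss = rbf_kernel(X_star, X_star)
    mean = K_s.T @ np.linalg.solve(K, y)
    var = np.diag(K_ss - K_s.T @ np.linalg.solve(K, K_s))
    return mean, np.maximum(var, 0.0)  # clip tiny negative numerical noise

# Toy model-learning data: observed transitions of a deterministic 1-D system.
X = np.array([[0.0], [0.5], [1.0]])
y = np.sin(X).ravel()
X_star = np.array([[0.25], [2.0]])  # one query near the data, one far away

mean, var = gp_posterior(X, y, X_star)

# Optimism in the face of uncertainty: score candidates by an upper
# confidence bound, so the uncertain far-away point earns an exploration bonus.
beta = 2.0  # assumed exploration weight
ucb = mean + beta * np.sqrt(var)
```

Near the training data the posterior variance collapses and the GP interpolates almost exactly, while far from the data the variance grows, so an agent ranking candidate states by `ucb` is drawn toward unexplored regions — the same principle RMAX uses, here realized through the GP's own uncertainty estimates.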

BibTeX Entry

@InProceedings{ECML10-jung,
	author    = "Tobias Jung and Peter Stone",
	title     = "Gaussian processes for sample efficient reinforcement learning with {RMAX}-like exploration",
	booktitle = "The European Conference on Machine Learning (ECML)",
	month     = "September",
	year      = "2010",
	abstract  = {
		We present an implementation of model-based online
		reinforcement learning (RL) for continuous domains with deterministic
		transitions that is specifically designed to achieve low sample
		complexity. To achieve low sample complexity, since the environment is
		unknown, an agent must intelligently balance exploration and
		exploitation, and must be able to rapidly generalize from observations.
		While in the past a number of related sample efficient RL algorithms
		have been proposed, to allow theoretical analysis, mainly model-learners
		with weak generalization capabilities were considered. Here, we separate
		function approximation in the model-learner (which does require samples)
		from the interpolation in the planner (which does not require samples).
		For model-learning we apply Gaussian process (GP) regression, which is
		able to automatically adjust itself to the complexity of the problem
		(via Bayesian hyperparameter selection) and, in practice, is often able
		to learn a highly accurate model from very little data. In addition, a
		GP provides a natural way to determine the uncertainty of its
		predictions, which allows us to implement the ``optimism in the face of
		uncertainty'' principle used to efficiently control exploration. Our
		method is evaluated on four common benchmark domains.
	},
}

Generated by bib2html.pl (written by Patrick Riley) on Fri Aug 15, 2014 16:26:03