Peter Stone's Selected Publications



An Empirical Analysis of Value Function-Based and Policy Search Reinforcement Learning

Shivaram Kalyanakrishnan and Peter Stone. An Empirical Analysis of Value Function-Based and Policy Search Reinforcement Learning. In The Eighth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 749–756, International Foundation for Autonomous Agents and Multiagent Systems, May 2009.


Abstract

In several agent-oriented scenarios in the real world, an autonomous agent that is situated in an unknown environment must learn through a process of trial and error to take actions that result in long-term benefit. Reinforcement Learning (or sequential decision making) is a paradigm well-suited to this requirement. Value function-based methods and policy search methods are contrasting approaches to solve reinforcement learning tasks. While both classes of methods benefit from independent theoretical analyses, these often fail to extend to the practical situations in which the methods are deployed. We conduct an empirical study to examine the strengths and weaknesses of these approaches by introducing a suite of test domains that can be varied for problem size, stochasticity, function approximation, and partial observability. Our results indicate clear patterns in the domain characteristics for which each class of methods excels. We investigate whether their strengths can be combined, and develop an approach to achieve that purpose. The effectiveness of this approach is also demonstrated on the challenging benchmark task of robot soccer Keepaway. We highlight several lines of inquiry that emanate from this study.

BibTeX Entry

@InProceedings{AAMAS09-kalyanakrishnan,
  author = {Shivaram Kalyanakrishnan and Peter Stone},
  title = {An Empirical Analysis of Value Function-Based and Policy Search Reinforcement Learning},
  booktitle = {The Eighth International Conference on Autonomous Agents and Multiagent Systems (AAMAS)},
  location  = {Budapest, Hungary},
  month     = {May},
  year      = {2009},
  pages     = {749--756},
  isbn      = {978-0-9817381-7-8},
  publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
  abstract  = {
	In several agent-oriented scenarios in the real world, an autonomous
	agent that is situated in an unknown environment must learn through
	a process of trial and error to take actions that result in long-term
	benefit.  Reinforcement Learning (or sequential decision making) is a
	paradigm well-suited to this requirement.  Value function-based methods
	and policy search methods are contrasting approaches to solve
	reinforcement learning tasks.  While both classes of methods benefit
	from independent theoretical analyses, these often fail to extend to
	the practical situations in which the methods are deployed.  We conduct
	an empirical study to examine the strengths and weaknesses of these
	approaches by introducing a suite of test domains that can be varied
	for problem size, stochasticity, function approximation, and partial
	observability.  Our results indicate clear patterns in the domain
	characteristics for which each class of methods excels.  We
	investigate whether their strengths can be combined, and develop an
	approach to achieve that purpose.  The effectiveness of this approach
	is also demonstrated on the challenging benchmark task of robot soccer
	Keepaway.  We highlight several lines of inquiry that emanate from this
	study.
  },
  wwwnote={<a href="http://www.conferences.hu/AAMAS2009/">AAMAS 2009</a>},
}
