Peter Stone's Selected Publications

The Utility of Temporal Abstraction in Reinforcement Learning

Nicholas K. Jong, Todd Hester, and Peter Stone. The Utility of Temporal Abstraction in Reinforcement Learning. In The Seventh International Joint Conference on Autonomous Agents and Multiagent Systems, May 2008.
AAMAS-2008

Download

[PDF] 136.4 kB   [postscript] 325.7 kB

Abstract

The hierarchical structure of real-world problems has motivated extensive research into temporal abstractions for reinforcement learning, but precisely how these abstractions allow agents to improve their learning performance is not well understood. This paper investigates the connection between temporal abstraction and an agent's exploration policy, which determines how the agent's performance improves over time. Experimental results with standard methods for incorporating temporal abstractions show that these methods benefit learning only in limited contexts. The primary contribution of this paper is a clearer understanding of how hierarchical decompositions interact with reinforcement learning algorithms, with important consequences for the manual design or automatic discovery of action hierarchies.
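To make the abstract's central object concrete: a "temporal abstraction" in the sense used here is typically an option — a sub-policy with its own termination condition that the agent executes as a single macro-action, accumulating discounted reward over multiple time steps. The corridor task and the specific option below are illustrative assumptions for this sketch, not the paper's experimental domains.

```python
# A minimal sketch of a temporal abstraction ("option"): a sub-policy
# plus a termination condition, executed as one macro-action. The
# environment is an assumed toy corridor, not from the paper.

N = 10  # corridor states 0 .. N-1; reward on reaching state N-1

def primitive_step(state, action):
    """One-step dynamics: action -1 moves left, +1 moves right."""
    nxt = max(0, min(N - 1, state + action))
    reward = 1.0 if nxt == N - 1 else 0.0
    done = nxt == N - 1
    return nxt, reward, done

def run_option(state, gamma=0.9):
    """Option 'go right until the goal': its internal policy always
    chooses +1; it terminates at the goal state. Returns the final
    state, the discounted cumulative reward, and the elapsed duration,
    which is what SMDP-style learning updates consume."""
    total, discount, k = 0.0, 1.0, 0
    done = False
    while not done:
        state, r, done = primitive_step(state, +1)
        total += discount * r
        discount *= gamma
        k += 1
    return state, total, k

final_state, ret, duration = run_option(3)
print(final_state, duration)  # 9 6
```

Starting from state 3, the option runs six primitive steps to the goal, so the only reward is discounted by gamma**5; an agent reasoning over such options explores at the level of whole trajectories rather than single steps, which is the interaction with exploration that the paper examines.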

BibTeX Entry

@InProceedings{AAMAS08-jong,
  author="Nicholas K.\ Jong and Todd Hester and Peter Stone",
  title="The Utility of Temporal Abstraction in Reinforcement Learning",
  booktitle="The Seventh International Joint Conference on Autonomous Agents and Multiagent Systems",
  month="May",
  year="2008",
  abstract={ The hierarchical structure of real-world problems has
             motivated extensive research into temporal
             abstractions for reinforcement learning, but
             precisely how these abstractions allow agents to
             improve their learning performance is not well
             understood.  This paper investigates the connection
             between temporal abstraction and an agent's
             exploration policy, which determines how the agent's
             performance improves over time.  Experimental results
             with standard methods for incorporating temporal
             abstractions show that these methods benefit learning
             only in limited contexts.  The primary contribution
             of this paper is a clearer understanding of how
             hierarchical decompositions interact with
             reinforcement learning algorithms, with important
             consequences for the manual design or automatic
             discovery of action hierarchies.  },
  wwwnote={<a href="http://gaips.inesc-id.pt/aamas2008/">AAMAS-2008</a>},
}

Generated by bib2html.pl (written by Patrick Riley) on Fri Aug 15, 2014 16:26:03