The Utility of Temporal Abstraction in Reinforcement Learning (2008)
The hierarchical structure of real-world problems has motivated extensive research into temporal abstractions for reinforcement learning, but precisely how these abstractions allow agents to improve their learning performance is not well understood. This paper investigates the connection between temporal abstraction and an agent's exploration policy, which determines how the agent's performance improves over time. Experimental results with standard methods for incorporating temporal abstractions show that these methods benefit learning only in limited contexts. The primary contribution of this paper is a clearer understanding of how hierarchical decompositions interact with reinforcement learning algorithms, with important consequences for the manual design or automatic discovery of action hierarchies.
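The abstract is terse about what "standard methods for incorporating temporal abstractions" look like in practice, so the sketch below illustrates the usual formalism, the options framework (Sutton, Precup, and Singh, 1999): a temporally extended action is executed to termination and credited with a single SMDP Q-learning backup discounted by gamma**k. The corridor domain, the hand-coded "sprint_right" option, and all hyperparameters here are illustrative assumptions, not the paper's experimental setup.

```python
import random

# A minimal sketch of SMDP Q-learning with one hand-coded option in a
# 1-D corridor. The corridor, the option, and all hyperparameters are
# illustrative assumptions, not the paper's experimental setup.

N = 10          # corridor states 0..N-1; reward only on reaching state N-1
GAMMA = 0.95    # discount factor
ALPHA = 0.1     # learning rate
EPSILON = 0.1   # exploration rate

PRIMITIVES = ["left", "right"]
OPTIONS = PRIMITIVES + ["sprint_right"]  # one temporally extended action

def step(s, a):
    """One primitive transition: (next_state, reward, done)."""
    s2 = max(0, s - 1) if a == "left" else min(N - 1, s + 1)
    return s2, (1.0 if s2 == N - 1 else 0.0), s2 == N - 1

def run_option(s, o):
    """Execute o to termination: (s', discounted return, duration k, done)."""
    if o in PRIMITIVES:
        s2, r, done = step(s, o)
        return s2, r, 1, done
    # "sprint_right": repeat 'right' until the goal or 3 steps (an assumed
    # termination condition), accumulating the discounted reward along the way.
    total, k, done = 0.0, 0, False
    while k < 3 and not done:
        s, r, done = step(s, "right")
        total += (GAMMA ** k) * r
        k += 1
    return s, total, k, done

Q = {(s, o): 0.0 for s in range(N) for o in OPTIONS}

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy over primitives and options alike, random tie-breaking
        if random.random() < EPSILON:
            o = random.choice(OPTIONS)
        else:
            o = max(OPTIONS, key=lambda x: (Q[(s, x)], random.random()))
        s2, r, k, done = run_option(s, o)
        # SMDP backup: a k-step option is discounted by gamma**k
        target = r + (0.0 if done else
                      (GAMMA ** k) * max(Q[(s2, x)] for x in OPTIONS))
        Q[(s, o)] += ALPHA * (target - Q[(s, o)])
        s = s2

print("value estimate at the start state:", max(Q[(0, o)] for o in OPTIONS))
```

The gamma**k factor in the backup is what distinguishes the SMDP update from one-step Q-learning: the option's whole trajectory is credited as one decision. That same coupling is why the choice of options changes which states the epsilon-greedy policy actually visits, the interaction between temporal abstraction and exploration that the paper investigates.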
Citation:
In Proceedings of the Seventh International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), May 2008.
Authors:
Todd Hester, Postdoctoral Alumni, todd [at] cs utexas edu
Nicholas Jong, Ph.D. Alumni, nickjong [at] me com
Peter Stone, Faculty, pstone [at] cs utexas edu