UTCS Artificial Intelligence
The Utility of Temporal Abstraction in Reinforcement Learning (2008)
Nicholas K. Jong, Todd Hester, and Peter Stone
The hierarchical structure of real-world problems has motivated extensive research into temporal abstractions for reinforcement learning, but precisely how these abstractions allow agents to improve their learning performance is not well understood. This paper investigates the connection between temporal abstraction and an agent's exploration policy, which determines how the agent's performance improves over time. Experimental results with standard methods for incorporating temporal abstractions show that these methods benefit learning only in limited contexts. The primary contribution of this paper is a clearer understanding of how hierarchical decompositions interact with reinforcement learning algorithms, with important consequences for the manual design or automatic discovery of action hierarchies.
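The temporal abstractions the abstract refers to are temporally extended actions: instead of choosing a single primitive action, the agent can invoke a sub-policy that runs for many steps until a termination condition holds. As a minimal sketch of this idea, the toy gridworld, option policy, and termination condition below are illustrative assumptions, not the domains or methods evaluated in the paper.

```python
# Illustrative sketch of a temporally extended action ("option") in a
# toy gridworld. All names and the domain itself are assumptions for
# illustration; they are not taken from the paper.

WIDTH, HEIGHT = 5, 5
GOAL = (4, 4)
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def step(state, action):
    """Apply one primitive action; movement is clipped at the grid edges."""
    x, y = state
    dx, dy = ACTIONS[action]
    return (min(max(x + dx, 0), WIDTH - 1), min(max(y + dy, 0), HEIGHT - 1))

def go_to_goal_option(state):
    """A temporally abstract action: follow a fixed internal policy
    (move right until aligned with the goal column, then move up) and
    terminate when GOAL is reached. Returns the resulting state and the
    number of primitive steps the option consumed."""
    steps = 0
    while state != GOAL:
        action = "right" if state[0] < GOAL[0] else "up"
        state = step(state, action)
        steps += 1
    return state, steps

state, n = go_to_goal_option((0, 0))
print(state, n)  # -> (4, 4) 8
```

From the exploration standpoint the abstract raises, the key effect is that one decision by the agent commits it to eight primitive steps here, which concentrates experience along the option's trajectory rather than spreading it uniformly over primitive actions.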
View: PDF, PS, HTML
Citation: In The Seventh International Joint Conference on Autonomous Agents and Multiagent Systems, May 2008.
Bibtex:
@InProceedings{AAMAS08-jong,
  title     = {The Utility of Temporal Abstraction in Reinforcement Learning},
  author    = {Nicholas K. Jong and Todd Hester and Peter Stone},
  booktitle = {The Seventh International Joint Conference on Autonomous Agents and Multiagent Systems},
  month     = {May},
  year      = {2008},
  url       = {http://www.cs.utexas.edu/users/ai-lab?AAMAS08-jong}
}
People
Todd Hester
Postdoctoral Alumni
todd [at] cs utexas edu
Nicholas Jong
Ph.D. Alumni
nickjong [at] me com
Peter Stone
Faculty
pstone [at] cs utexas edu
Areas of Interest
Machine Learning
Planning
Labs
Learning Agents