UTCS Artificial Intelligence
Autonomously Learning an Action Hierarchy Using a Learned Qualitative State Representation (2009)
Jonathan Mugan and Benjamin Kuipers
There has been intense interest in hierarchical reinforcement learning as a way to make Markov decision process planning more tractable, but there has been relatively little work on autonomously learning the hierarchy, especially in continuous domains. In this paper we present a method for learning a hierarchy of actions in a continuous environment. Our approach is to learn a qualitative representation of the continuous environment and then to define actions to reach qualitative states. Our method learns one or more options to perform each action. Each option is learned by first learning a dynamic Bayesian network (DBN). We approach this problem from a developmental robotics perspective. The agent receives no extrinsic reward and has no external direction for what to learn. We evaluate our work using a simulation with realistic physics that consists of a robot playing with blocks at a table.
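The core idea of mapping a continuous environment onto qualitative states can be illustrated with a small sketch. The paper's actual representation and landmark-learning procedure are more involved; the function name, the tuple encoding, and the landmark values below are illustrative assumptions, not taken from the paper.

```python
import bisect

def qualitative_value(x, landmarks):
    """Map a continuous value onto a qualitative value.

    A sorted list of landmarks partitions the real line: a value either
    falls exactly on a landmark or into one of the open intervals
    between (or beyond) them. Returns ('at', i) for landmark i, or
    ('between', i-1, i) for the interval just below landmark i
    (interval index -1 means below the first landmark).
    """
    for i, lm in enumerate(landmarks):
        if x == lm:
            return ('at', i)
    i = bisect.bisect_left(landmarks, x)
    return ('between', i - 1, i)

# Illustrative example: a single landmark at 0.0 could split a
# hand-to-block distance into "touching" vs. "apart".
landmarks = [0.0]
print(qualitative_value(0.0, landmarks))   # ('at', 0)
print(qualitative_value(0.7, landmarks))   # ('between', 0, 1)
print(qualitative_value(-0.2, landmarks))  # ('between', -1, 0)
```

An action in this scheme would then be "reach qualitative value v," and an option implementing it would be a policy learned (via a DBN model, in the paper's method) for driving the corresponding continuous variable across the relevant landmark.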
View:
PDF
Citation:
In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-09), 2009.
Bibtex:
@inproceedings{Mugan-ijcai-09,
  title={Autonomously Learning an Action Hierarchy Using a Learned Qualitative State Representation},
  author={Jonathan Mugan and Benjamin Kuipers},
  booktitle={Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-09)},
  url={http://www.cs.utexas.edu/users/ai-lab?Mugan-ijcai-09},
  year={2009}
}
People
Benjamin Kuipers
Formerly affiliated Faculty
kuipers [at] cs utexas edu
Jonathan Mugan
Ph.D. Alumni
jmugan [at] cs utexas edu
Areas of Interest
Bootstrap Learning
Robotics