UTCS Artificial Intelligence
Deep Reinforcement Learning in Parameterized Action Space (2016)
Matthew Hausknecht and Peter Stone
Recent work has shown that deep neural networks are capable of approximating both value functions and policies in reinforcement learning domains featuring continuous state and action spaces. However, to the best of our knowledge, no previous work has succeeded at using deep neural networks in structured (parameterized) continuous action spaces. To fill this gap, this paper focuses on learning within the domain of simulated RoboCup soccer, which features a small set of discrete action types, each of which is parameterized with continuous variables. The best learned agents can score goals more reliably than the 2012 RoboCup champion agent. As such, this paper represents a successful extension of deep reinforcement learning to the class of parameterized action space MDPs.
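The structure the abstract describes — a small set of discrete action types, each carrying its own continuous parameters — can be sketched in a few lines. This is an illustrative sketch only: the action names and parameter ranges below are assumptions loosely modeled on simulated soccer, not the paper's exact interface.

```python
import random

# Illustrative parameterized action space: each discrete action type
# maps to its continuous parameters and their (low, high) ranges.
# Names and ranges are assumptions for illustration.
ACTION_SPACE = {
    "dash": {"power": (0.0, 100.0), "direction": (-180.0, 180.0)},
    "turn": {"direction": (-180.0, 180.0)},
    "kick": {"power": (0.0, 100.0), "direction": (-180.0, 180.0)},
}

def sample_action(rng=random):
    """Pick a discrete action type, then sample its continuous parameters.

    A learned policy would replace both random choices: it outputs a
    distribution over action types plus a value for each parameter.
    """
    action_type = rng.choice(sorted(ACTION_SPACE))
    params = {name: rng.uniform(lo, hi)
              for name, (lo, hi) in ACTION_SPACE[action_type].items()}
    return action_type, params

action_type, params = sample_action()
print(action_type, params)
```

The point of the sketch is the hybrid output: the agent must make one discrete choice (which action type) and, jointly, a continuous choice for that type's parameters, which is what distinguishes parameterized action space MDPs from purely discrete or purely continuous control.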
View: PDF, HTML
Citation:
In Proceedings of the International Conference on Learning Representations (ICLR), San Juan, Puerto Rico, May 2016.
Bibtex:
@inproceedings{ICLR16-hausknecht,
  title={Deep Reinforcement Learning in Parameterized Action Space},
  author={Matthew Hausknecht and Peter Stone},
  booktitle={Proceedings of the International Conference on Learning Representations (ICLR)},
  month={May},
  year={2016},
  address={San Juan, Puerto Rico},
  url={http://www.cs.utexas.edu/users/ai-lab?hausknecht:iclr16}
}
People
Peter Stone
Faculty
pstone [at] cs utexas edu
Areas of Interest
Deep Learning
Reinforcement Learning
RoboCup
Simulated Robot Soccer
Labs
Learning Agents