PhD Final Oral Defense: Todd Hester, Monday, December 3, 2012, 11:30 AM, ACES 3.408

Contact Name: Lydia Griffith
Date: Dec 3, 2012, 11:30am - 12:30pm

PhD Final Oral Defense: Todd Hester

Date: Monday, December 3, 2012
Time: 11:30 AM
Place: ACES 3.408
Research Supervisor: Peter Stone

Thesis title: TEXPLORE: Temporal Difference Reinforcement Learning for Robots and Time-Constrained Domains

Abstract:
Robots have the potential to solve many problems in society because they can
work in dangerous places, doing necessary jobs that no one
wants or is able to do. One barrier to their widespread deployment is that they
are mainly limited to tasks where it is possible to hand-program behaviors for
every situation that may be encountered. For robots to meet their potential,
they need methods that enable them to learn and adapt to novel situations that
they were not programmed for. Reinforcement learning (RL) is a paradigm for
learning sequential decision-making processes that could solve the problems of
learning and adaptation on robots. This thesis identifies four key challenges
that must be addressed for an RL algorithm to be practical for robotic control
tasks. These RL for Robotics Challenges are: 1) it must learn in very few
samples; 2) it must learn in domains with continuous state features; 3) it must
handle sensor and/or actuator delays; and 4) it should continually take actions
in real-time. This thesis addresses all four of these challenges; in
particular, it focuses on time-constrained domains, where the first
challenge is critically important. In these domains, the agent’s lifetime is not
long enough for it to explore the domain thoroughly, and it must learn in very
few samples.

Although existing RL algorithms successfully address one or more of the
RL for Robotics Challenges, no prior algorithm addresses all four of them. To
fill this gap, this thesis introduces TEXPLORE, the first algorithm to address
all four challenges. TEXPLORE is a model-based RL method that learns a
random forest model of the domain that generalizes dynamics to unseen
states. Each tree in the random forest model represents a hypothesis of the
domain’s true dynamics, and the agent uses these hypotheses to explore states
that are promising for the final policy, while ignoring states that do not appear
promising. With sample-based planning and a novel parallel architecture,
TEXPLORE can select actions continually in real-time whenever necessary.
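
To make the exploration mechanism concrete, the following is a minimal
Python sketch with assumed names and design details (ForestDynamicsModel,
relative-change targets); it is an illustration, not the thesis
implementation. Each tree in a small forest is fit on a bootstrap sample of
observed transitions, and a planner samples one tree per prediction,
treating each tree as one hypothesis of the true dynamics.

    # Hedged sketch of a TEXPLORE-style forest dynamics model; the class
    # name and details are assumptions for illustration, not the thesis code.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    class ForestDynamicsModel:
        def __init__(self, n_trees=5, seed=0):
            self.rng = np.random.default_rng(seed)
            self.trees = [DecisionTreeRegressor(random_state=i)
                          for i in range(n_trees)]

        def fit(self, states, actions, next_states):
            # Each tree sees its own bootstrap resample, so the trees form
            # a set of distinct hypotheses about the domain's dynamics.
            X = np.hstack([states, actions])
            y = next_states - states          # predict relative state change
            for tree in self.trees:
                idx = self.rng.integers(0, len(X), len(X))
                tree.fit(X[idx], y[idx])

        def sample_next_state(self, state, action):
            # Sample ONE tree: one hypothesis of the true dynamics.
            tree = self.trees[self.rng.integers(len(self.trees))]
            x = np.concatenate([state, action])[None, :]
            return state + tree.predict(x)[0]

A sample-based planner that rolls out trajectories through randomly sampled
hypotheses is optimistic wherever the trees disagree and at least one
hypothesis predicts high return, so exploration concentrates on such
promising states while states that every hypothesis rates poorly are ignored.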

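The real-time claim can likewise be illustrated with a hedged sketch of a
parallel architecture; the thread layout below is an assumption drawn from
the abstract, not the thesis code. Model learning and planning run on
background threads, so the control loop can always return its current best
action without waiting on either.

    # Hedged sketch of parallel real-time action selection: background
    # threads learn and plan while act() returns immediately.
    import random
    import threading
    import time

    class RealTimeAgent:
        def __init__(self, actions):
            self.actions = actions
            self.best_action = random.choice(actions)  # always ready to act
            self.experience = []                       # shared transition log
            self.lock = threading.Lock()
            for worker in (self._model_loop, self._plan_loop):
                threading.Thread(target=worker, daemon=True).start()

        def _model_loop(self):
            while True:               # refit the forest model as data arrives
                with self.lock:
                    data = list(self.experience)
                # ... update a ForestDynamicsModel from `data` here ...
                time.sleep(0.05)

        def _plan_loop(self):
            while True:               # anytime planning: keep improving
                # ... sample-based rollouts through the model would go here;
                # this placeholder just publishes an arbitrary action ...
                choice = random.choice(self.actions)
                with self.lock:
                    self.best_action = choice
                time.sleep(0.01)

        def act(self, state, reward):
            # Called at the control frequency; never blocks on learning
            # or planning, only on a brief lock acquisition.
            with self.lock:
                self.experience.append((state, reward))
                return self.best_action

Because action selection only reads the most recently published plan, its
latency is independent of how long a model update or planning sweep takes,
which is the property the abstract's parallel architecture provides.
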
We empirically evaluate each component of TEXPLORE in comparison
with other state-of-the-art approaches. In addition, we present modifications
of TEXPLORE’s exploration mechanism for different types of domains. The key
result of this thesis is a demonstration of TEXPLORE learning to control the
velocity of an autonomous vehicle on-line, in real-time, while running on-board
the robot. After controlling the vehicle for only two minutes, TEXPLORE learns
to move the pedals of the vehicle to drive at the desired velocities. The
work presented in this thesis represents an important step towards applying RL
to robotics and enabling robots to perform more tasks in society. By enabling
an agent to learn in few actions while acting on-line in real-time on robots with
continuous state and actuator delays, TEXPLORE significantly broadens the
applicability of RL to robots.