Research Interests

I am a post-doc in the Department of Computer Science and a research educator with the Freshman Research Initiative, both at the University of Texas at Austin. I completed my Ph.D. in December 2012 on the development of new reinforcement learning algorithms for learning on robots. My Ph.D. advisor was Peter Stone, and I am a member of the Learning Agents Research Group at UT. I am also part of the UT Austin Villa robot soccer team, which won the 2012 World Championship in the Standard Platform League. My research is in reinforcement learning and robotics, specifically developing new reinforcement learning algorithms that are applicable to robots and other large, complex domains.

Before coming to UT, I worked in the Motion Analysis Laboratory at Spaulding Rehabilitation Hospital in Boston, where I developed methods to evaluate the mobility of stroke, arthritis, and Parkinson's patients using wearable sensors. We used machine learning techniques to analyze the wearable sensor data and quantify the quality of the patients' movements. I've also worked on a number of projects on my own, including building a robot from scratch, writing a program that uses machine learning to predict the scores of NFL games, and writing a 3D Connect Four game.

Robot Soccer

I have been a member of the UT Austin Villa robot soccer team since 2006 and have participated in the Legged League and Standard Platform League at RoboCup. In 2012, we won the world championship in the Standard Platform League, as well as the US Open; we also won the US Open in 2009 and 2010. My research on robot soccer began with localization (see ICRA paper), but has since expanded to encompass all parts of the robot soccer codebase, including vision, locomotion, behaviors, coordination, and debugging tools (see team paper).

Reinforcement Learning

Reinforcement learning (RL) is a learning paradigm in which an agent learns to act optimally by interacting with its environment. At each step, the agent observes its current state and chooses from a set of available actions; the action takes it to a new state and yields a reward, and the agent's goal is to maximize the reward it accumulates over time. I am specifically focused on model-based reinforcement learning, where an agent learns a model of its environment and can learn a policy by simulating actions in its model. I have been developing an RL algorithm called TEXPLORE, which extends these model-based approaches to larger domains by incorporating generalization into the learning of the model (see ICDL paper, ICRA paper, and video). In addition, I am examining the problem of exploration versus exploitation: deciding when the agent should exploit what it thinks it knows and when it should explore more of the environment.
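
To make this loop concrete, here is a minimal tabular sketch in Python of acting, updating a learned model, and re-planning by simulating in that model. It is only an illustration under toy assumptions (a small chain environment, optimistic values for unvisited state-action pairs), not TEXPLORE itself.

    from collections import defaultdict

    # Minimal tabular model-based RL on a toy chain MDP. This is an
    # illustration of the act/update-model/re-plan loop, not TEXPLORE:
    # there is no generalization across states, and exploration comes
    # from simple optimism about unvisited state-action pairs.

    N, ACTIONS, GAMMA = 5, (0, 1), 0.95

    def env_step(s, a):
        # Toy environment: a=1 moves right, a=0 moves left;
        # reaching the rightmost state pays reward 1.
        s2 = min(s + 1, N - 1) if a == 1 else max(s - 1, 0)
        return s2, float(s2 == N - 1)

    counts = defaultdict(int)                            # visits to (s, a)
    successors = defaultdict(lambda: defaultdict(int))   # (s, a) -> {s': count}
    reward_sum = defaultdict(float)                      # total reward for (s, a)

    def q_value(s, a, V):
        n = counts[(s, a)]
        if n == 0:
            return 1.0 / (1.0 - GAMMA)  # optimistic value for unvisited pairs
        r = reward_sum[(s, a)] / n
        return r + GAMMA * sum(c / n * V[s2] for s2, c in successors[(s, a)].items())

    def plan(iters=60):
        # Value iteration in the learned model: "simulating actions in the model".
        V = [0.0] * N
        for _ in range(iters):
            V = [max(q_value(s, a, V) for a in ACTIONS) for s in range(N)]
        return V

    s = 0
    for _ in range(200):
        V = plan()
        a = max(ACTIONS, key=lambda act: q_value(s, act, V))  # greedy in the model
        s2, r = env_step(s, a)
        counts[(s, a)] += 1
        successors[(s, a)][s2] += 1
        reward_sum[(s, a)] += r
        s = s2

The optimistic default value is what drives the agent to try untested state-action pairs before settling on the best policy its model supports, a simple stand-in for the exploration-versus-exploitation trade-off discussed above.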

Links:

Curriculum Vitae
Ph.D. Dissertation
Thesis Defense Slides
University of Texas Dept of Computer Science

Contact Info:

Office: GDC 3.418
E-mail: todd AT cs DOT utexas DOT edu

Teaching

This semester, I am teaching CS 378: Autonomous Intelligent Robotics (FRI).

In the Fall 2012 semester, I taught CS 344M: Autonomous Multiagent Systems.

In the Spring 2012 semester, I was the TA for CS 378: Autonomous Vehicles in Traffic I. This course is part of the Freshman Research Initiative (FRI).

In the Fall 2009 semester, I was the TA for CS 393R: Autonomous Robotics, and won the department's Outstanding TA Award.

In the Spring 2009 semester, I was a TA for CS 307: Foundations of Computing.

Open Source Code

I have released a package (rl-texplore-ros-pkg) of reinforcement learning code for ROS. It contains a set of RL agents and environments, along with a formalism for them to communicate through ROS messages. In particular, the set of RL agents includes an implementation of our TEXPLORE agent (see our ICDL paper) and our real-time architecture for model-based agents (see our ICRA paper). A common interface is defined for agents, environments, models, and planners, so it is easy to add new agents, or to add new model learning or planning methods to the existing general model-based agent; the real-time architecture works with any model learning method that fits the defined interface. In addition, since the RL agents communicate using ROS messages, they are easy to integrate with robots that use an existing ROS architecture, enabling reinforcement learning on robots.
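
As an illustration of the common interface idea, here is a rough Python sketch of what such agent and environment contracts might look like. The actual package is implemented in C++ with ROS messages, and the names below are hypothetical, chosen only to show how a shared contract lets any agent drive any environment.

    from abc import ABC, abstractmethod

    # Illustrative sketch of a shared agent/environment contract.
    # The real package is C++ with ROS messages; these names are
    # hypothetical and only demonstrate the interface concept.

    class Agent(ABC):
        @abstractmethod
        def first_action(self, state):
            """Choose the first action of an episode from the start state."""

        @abstractmethod
        def next_action(self, reward, state):
            """Observe the last reward and new state; return the next action."""

    class Environment(ABC):
        @abstractmethod
        def reset(self):
            """Begin a new episode and return the initial state."""

        @abstractmethod
        def apply(self, action):
            """Execute an action; return (reward, next_state, terminal)."""

    def run_episode(agent, env, max_steps=1000):
        # Any Agent can be paired with any Environment via the contract.
        state = env.reset()
        action = agent.first_action(state)
        for _ in range(max_steps):
            reward, state, terminal = env.apply(action)
            if terminal:
                break
            action = agent.next_action(reward, state)

In the ROS package, the analogous exchange happens over ROS messages rather than direct method calls, which is what makes it straightforward to plug the agents into an existing robot stack.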

Ph.D. Thesis

Title: TEXPLORE: Temporal Difference Reinforcement Learning for Robots and Time-Constrained Domains.

Abstract:

Robots have the potential to solve many problems in society, because of their ability to work in dangerous places doing necessary jobs that no one wants or is able to do. One barrier to their widespread deployment is that they are mainly limited to tasks where it is possible to hand-program behaviors for every situation that may be encountered. For robots to meet their potential, they need methods that enable them to learn and adapt to novel situations that they were not programmed for. Reinforcement learning (RL) is a paradigm for learning sequential decision making processes that could solve the problems of learning and adaptation on robots. This thesis identifies four key challenges that must be addressed for an RL algorithm to be practical for robotic control tasks. These RL for Robotics Challenges are: 1) it must learn in very few samples; 2) it must learn in domains with continuous state features; 3) it must handle sensor and/or actuator delays; and 4) it should continually take actions in real-time. This thesis focuses on addressing all four of these challenges. In particular, this thesis is focused on time-constrained domains where the first challenge is critically important. In these domains, the agent's lifetime is not long enough for it to explore the domain thoroughly, and it must learn in very few samples.

Although existing RL algorithms successfully address one or more of the RL for Robotics Challenges, no prior algorithm addresses all four of them. To fill this gap, this thesis introduces TEXPLORE, the first algorithm to address all four challenges. TEXPLORE is a model-based RL method that learns a random forest model of the domain, which generalizes dynamics to unseen states. Each tree in the random forest model represents a hypothesis of the domain's true dynamics, and the agent uses these hypotheses to explore states that are promising for the final policy, while ignoring states that do not appear promising. With sample-based planning and a novel parallel architecture, TEXPLORE can select actions continually in real-time whenever necessary.
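
The sketch below (Python, using scikit-learn) illustrates the core idea of treating each tree in a random forest as a sampled hypothesis of the dynamics. The data, toy dynamics, and forest configuration here are invented purely for illustration; the real TEXPLORE model differs in its details.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Rough sketch of the idea behind TEXPLORE's model: fit a random
    # forest mapping (state, action) to the change in state, then treat
    # each tree as one sampled hypothesis of the true dynamics. The
    # data and dynamics here are invented purely for illustration.

    rng = np.random.default_rng(0)
    S = rng.uniform(-1.0, 1.0, size=(500, 2))   # hypothetical 2-D states
    A = rng.integers(0, 2, size=(500, 1))       # hypothetical binary actions
    S_next = S + 0.1 * (2 * A - 1)              # toy dynamics: shift by action
    X = np.hstack([S, A])
    forest = RandomForestRegressor(n_estimators=10).fit(X, S_next - S)

    def sample_next_state(s, a):
        # Predict with one randomly chosen tree, i.e. sample a dynamics
        # hypothesis; a sample-based planner can roll these out.
        tree = forest.estimators_[rng.integers(len(forest.estimators_))]
        delta = tree.predict(np.hstack([s, [a]]).reshape(1, -1))[0]
        return s + delta

A sample-based planner can then estimate action values by averaging returns over many such sampled rollouts; in TEXPLORE, planning over the different tree hypotheses is what steers exploration toward states that look promising for the final policy.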

We empirically evaluate each component of TEXPLORE in comparison with other state-of-the-art approaches. In addition, we present modifications of TEXPLORE's exploration mechanism for different types of domains. The key result of this thesis is a demonstration of TEXPLORE learning to control the velocity of an autonomous vehicle on-line, in real-time, while running on-board the robot. After controlling the vehicle for only two minutes, TEXPLORE is able to learn to move the pedals of the vehicle to drive at the desired velocities. The work presented in this thesis represents an important step towards applying RL to robotics and enabling robots to perform more tasks in society. By enabling robots to learn in few actions while acting on-line in real-time on robots with continuous state and actuator delays, TEXPLORE significantly broadens the applicability of RL to robots.

Dissertation
Defense Slides

Videos

2012 RoboCup Final: Austin Villa vs. B-Human

Video of our 2012 SPL final against B-Human, who had won the previous three championships and had never lost a game.

2012 RoboCup Semi-Final: Austin Villa vs. rUNSWift

Video of our 2012 SPL semi-final against rUNSWift. It was a very exciting game: we were tied or trailing by one or two goals for the entire match, until we took the lead with 1:30 remaining and held on to win.

2010 RoboCup Highlights

Highlights of TT-UT Austin Villa at the 2010 RoboCup Standard Platform League competition in Singapore, where the team took 3rd place.

Learning to Score Penalty Kicks via Reinforcement Learning

The accompanying video for our ICRA 2010 paper, where we learn to score penalty kicks via a novel model-based reinforcement learning method.

2009 RoboCup Highlights

Highlights of TT-UT Austin Villa at the 2009 RoboCup Standard Platform League. TT-UT Austin Villa finished in 4th place, losing to only two teams during the tournament.

2009 US Open Highlights

Highlights of TT-UT Austin Villa at the 2009 US Open, which the team won by defeating UPenn in the final (1-1 after regulation, 3-2 on penalty kicks).

Aibo Highlights

This video shows highlights (both shots and saves) from demonstrations held during Explore UT on March 7, 2009.