- 8/31/20 — I started a postdoc at UC Berkeley working with Anca Dragan and Ken Goldberg.
- 7/20/20 — I successfully defended my PhD dissertation! A video of the presentation is available here.
- 5/31/20 — Our paper Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences was accepted to ICML 2020.
- 2/01/20 — My research with Prof. Scott Niekum was featured in a recent Quanta magazine article.
- 11/18/19 — I passed my PhD dissertation proposal and advanced to candidacy!
- 10/10/19 — Our paper Deep Bayesian Reward Learning from Preferences was accepted to the 2019 NeurIPS Workshop on Safety and Robustness in Decision Making.
- 9/7/19 — Our paper Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations was accepted to the 2019 Conference on Robot Learning (CoRL).
- 6/1/19 — Code for our paper Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations is available on GitHub: T-REX code. We also have a project page with videos: T-REX Project Page.
- 4/21/19 — Our paper on Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations was accepted to ICML 2019.
- 4/12/19 — Our paper on Learning from Suboptimal Demonstrations through Inverse Reinforcement Learning from Ranked Observations was accepted to RLDM 2019.
- 10/31/18 — Our paper on Machine Teaching for Inverse Reinforcement Learning was accepted to AAAI 2019.
- 9/1/18 — Our paper on Risk-Aware Active Inverse Reinforcement Learning was accepted to the 2018 Conference on Robot Learning.
- 8/17/18 — Our paper on the UT Austin Villa RoboCup@Home robot architecture was accepted to the AAAI 2018 Fall Symposium on Reasoning and Learning in Real-World Systems for Long-Term Autonomy.
- 11/10/17 — Code for our AAAI 2018 paper is now available on GitHub: aaai-2018-code
- 11/10/17 — I presented our paper at AAAI 2018 in New Orleans. We develop a practical method for bounding policy loss and performing risk-aware policy improvement when learning from demonstrations. Presentation slides are available as PowerPoint or PDF.
- 11/10/17 — I presented our paper on Probabilistic Safety Bounds for Robot Learning from Demonstration at the AAAI Fall Symposium on AI for HRI.
- 11/9/17 — Our paper on Efficient Probabilistic Performance Bounds for Inverse Reinforcement Learning was accepted to AAAI 2018.
- 7/30/17 — Our team, UT Austin Villa, won third place in the 2017 RoboCup@Home Domestic Standard Platform League in Nagoya, Japan.
I'm currently a postdoc at UC Berkeley working with Anca Dragan and Ken Goldberg.
I recently finished my PhD in computer science at UT Austin, where I was advised by Scott Niekum. My research interests are safe reward inference and inverse reinforcement learning. In particular, I work on developing methods that allow a robot or other autonomous agent to:

- provide high-confidence bounds on performance when learning a policy from a limited number of demonstrations,
- ask risk-aware questions to resolve ambiguities and learn safe policies from human demonstrations,
- learn more efficiently from informative demonstrations,
- extrapolate demonstrator intent from rankings over suboptimal demonstrations, even when explicit rankings are unavailable, and
- perform fast Bayesian reward inference for visual control tasks.