Dian Chen 陈典

I am a third-year PhD student in Computer Science at UT Austin, advised by Prof. Philipp Krähenbühl.

Previously I studied at UC Berkeley majoring in Computer Science and Applied Mathematics, where I worked with Dr. Pulkit Agrawal, Deepak Pathak, Prof. Sergey Levine, Prof. Pieter Abbeel, and Prof. Jitendra Malik as a research assistant in the Berkeley Artificial Intelligence Research (BAIR) Lab.

Email  /  GitHub  /  Scholar

Research

My research interests lie in robotics, computer vision, and machine learning, including reinforcement learning.


Learning to Drive From a World on Rails
Dian Chen, Vladlen Koltun, Philipp Krähenbühl
(Oral Presentation) International Conference on Computer Vision (ICCV), 2021
website / code / arxiv

We present a model-based RL method for autonomous driving and navigation tasks. The world model is factorized into a passively moving environment and a compact ego component, which significantly simplifies reinforcement learning. Our method ranks first on the CARLA leaderboard and outperforms state-of-the-art imitation learning and model-free reinforcement learning on driving tasks. It is also an order of magnitude more sample efficient than model-free RL on the navigation games in the ProcGen benchmark.

Learning by Cheating
Dian Chen, Brady Zhou, Vladlen Koltun, Philipp Krähenbühl
Conference on Robot Learning (CoRL), 2019
website / code / video / arxiv

We present a two-stage imitation learning method for vision-based driving. Our approach achieves 100% success rate on all tasks in the original CARLA benchmark, sets a new record on the NoCrash benchmark, and reduces the frequency of infractions by an order of magnitude compared to the prior state of the art.

Learning Instance Segmentation by Interaction
Deepak Pathak*, Fred Shentu*, Dian Chen*, Pulkit Agrawal*, Trevor Darrell, Sergey Levine, Jitendra Malik (*equal contribution)
Robotics Vision Workshop, Conference on Computer Vision and Pattern Recognition (CVPR), 2018
website / arxiv

We present a robotic system that learns to segment its visual observations into individual objects by experimenting with its environment in a completely self-supervised manner. Our system is on par with a state-of-the-art instance segmentation algorithm trained with strong supervision.

Zero-Shot Visual Imitation
Deepak Pathak*, Parsa Mahmoudieh*, Michael Luo*, Pulkit Agrawal*, Dian Chen, Fred Shentu, Evan Shelhamer, Jitendra Malik, Alexei Efros, Trevor Darrell (*equal contribution)
(Oral Presentation) International Conference on Learning Representations (ICLR), 2018
website / arxiv

We present a novel skill policy architecture and dynamics consistency loss that extend visual imitation to more complex environments while improving robustness. Experimental results are shown on a robot knot-tying task and a first-person visual navigation task.

Combining Self-Supervised Learning and Imitation for Vision-Based Rope Manipulation
Ashvin Nair*, Dian Chen*, Pulkit Agrawal*, Phillip Isola, Jitendra Malik, Pieter Abbeel, Sergey Levine (*equal contribution)
IEEE International Conference on Robotics and Automation (ICRA), 2017
website / arxiv

We present a system in which a robot takes as input a sequence of monocular images of a human manipulating a rope from an initial to a goal configuration, and outputs a sequence of actions that reproduces the human demonstration.

Teaching
CS394D - Deep Learning - Fall 2020
Teaching Assistant
CS395T - Deep Learning Seminar - Fall 2019
Teaching Assistant
CS342 - Neural Networks - Fall 2018
Teaching Assistant
Service
RA-L, IROS 2020-2021, ICRA 2021, ICLR 2021, NeurIPS 2021
Reviewer
