RIDM: Reinforced Inverse Dynamics Modeling for Learning from a Single Observed Demonstration (2020)
Augmenting reinforcement learning with imitation learning is often hailed as a means of improving upon learning from scratch. However, most existing methods for integrating these two techniques are subject to several strong assumptions---chief among them that information about demonstrator actions is available. In this paper, we investigate the extent to which this assumption is necessary by introducing and evaluating reinforced inverse dynamics modeling (RIDM), a novel paradigm for combining imitation from observation (IfO) and reinforcement learning with no dependence on demonstrator action information. Moreover, RIDM requires only a single demonstration trajectory and is able to operate directly on raw (unaugmented) state features. We find experimentally that RIDM performs favorably compared to a baseline approach on several tasks in simulation as well as on a real UR5 robot arm. Experiment videos can be found at https://sites.google.com/view/ridm-reinforced-inverse-dynami.
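The core idea behind inverse dynamics modeling is to recover the actions a demonstrator must have taken from state transitions alone. The toy sketch below is purely illustrative (it is not the paper's algorithm or environment): it fits a linear inverse dynamics model on the agent's own exploration data, then uses that model to infer actions along an observation-only demonstration. All names (`step`, `infer_action`) and the linear toy dynamics are assumptions made for this example.

```python
import numpy as np

# Hypothetical toy environment: s_{t+1} = s_t + 0.1 * a_t.
# The imitator never sees the demonstrator's actions, only states.
def step(s, a):
    return s + 0.1 * a

rng = np.random.default_rng(0)

# 1) Collect exploration data using the agent's OWN random actions.
states, actions, next_states = [], [], []
s = np.zeros(2)
for _ in range(500):
    a = rng.uniform(-1, 1, size=2)
    s2 = step(s, a)
    states.append(s); actions.append(a); next_states.append(s2)
    s = s2

# 2) Fit an inverse dynamics model a_t ~ W @ [s_t, s_{t+1}]
#    (linear least squares stands in for a learned model).
X = np.hstack([np.array(states), np.array(next_states)])
Y = np.array(actions)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def infer_action(s_t, s_t1):
    # Predict the action that caused the transition s_t -> s_t1.
    return np.hstack([s_t, s_t1]) @ W

# 3) Infer actions along an observation-only demonstration.
demo_states = [np.zeros(2)]
true_actions = [np.array([0.5, -0.3])] * 10   # hidden from the imitator
for a in true_actions:
    demo_states.append(step(demo_states[-1], a))

inferred = np.array([infer_action(demo_states[t], demo_states[t + 1])
                     for t in range(len(true_actions))])
err = np.abs(inferred - np.array(true_actions)).max()
```

In RIDM's setting, the reinforcement learning component would then refine behavior on top of such inferred actions; here the linear model recovers the hidden actions almost exactly because the toy dynamics are themselves linear.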
Citation:
IEEE Robotics and Automation Letters; presented at the International Conference on Intelligent Robots and Systems (IROS), 2020.
Josiah Hanna Ph.D. Student jphanna [at] cs utexas edu
Peter Stone Faculty pstone [at] cs utexas edu
Faraz Torabi Ph.D. Student faraztrb [at] cs utexas edu
Garrett Warnell Research Scientist warnellg [at] cs utexas edu