DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation (2021)
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator. Recent methods based on adversarial imitation learning have achieved state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to their reliance on data-inefficient, model-free reinforcement learning algorithms. This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk. In this work, we hypothesize that incorporating ideas from model-based reinforcement learning into adversarial methods for IfO can increase the data efficiency of these methods without sacrificing performance. Specifically, we consider time-varying linear Gaussian policies and propose a method that integrates the linear-quadratic regulator with path integral policy improvement into an existing adversarial IfO framework. The result is a more data-efficient IfO algorithm with better performance, which we show empirically in four simulation domains: using far fewer interactions with the environment, the proposed method achieves performance similar to or better than that of the existing technique.
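To make the policy class named above concrete, below is a minimal sketch (Python/NumPy, not the authors' implementation) of a time-varying linear Gaussian policy, in which the action at time step t is drawn from a Gaussian N(K_t x_t + k_t, Sigma_t) whose parameters are indexed by the time step. All names here (K, k, Sigma, horizon, state_dim, action_dim) and the toy dynamics are illustrative assumptions, not details from the paper.

# Minimal sketch of a time-varying linear Gaussian policy (illustrative only;
# not the authors' code). Action at step t: u_t ~ N(K_t x_t + k_t, Sigma_t).
import numpy as np

class TimeVaryingLinearGaussianPolicy:
    def __init__(self, horizon, state_dim, action_dim, rng=None):
        self.rng = rng if rng is not None else np.random.default_rng()
        # One gain matrix K_t, bias k_t, and covariance Sigma_t per time step.
        self.K = np.zeros((horizon, action_dim, state_dim))
        self.k = np.zeros((horizon, action_dim))
        self.Sigma = np.stack([np.eye(action_dim) for _ in range(horizon)])

    def act(self, t, x):
        """Sample an action u_t ~ N(K_t x + k_t, Sigma_t) for state x at step t."""
        mean = self.K[t] @ x + self.k[t]
        return self.rng.multivariate_normal(mean, self.Sigma[t])

# Usage example: roll the policy out on a toy 3-D state, 2-D action problem.
policy = TimeVaryingLinearGaussianPolicy(horizon=10, state_dim=3, action_dim=2)
x = np.zeros(3)
for t in range(10):
    u = policy.act(t, x)       # action from the time-indexed Gaussian
    x = x + 0.1 * np.ones(3)   # placeholder dynamics, for illustration only

In methods of this family, the per-step parameters (K_t, k_t, Sigma_t) are what the policy-improvement step updates; the sketch only shows how such a policy is represented and sampled.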
View: PDF
Citation:
In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, September 2021.
Peter Stone Faculty pstone [at] cs utexas edu
Faraz Torabi Ph.D. Student faraztrb [at] cs utexas edu
Garrett Warnell Research Scientist warnellg [at] cs utexas edu