Sample-efficient Adversarial Imitation Learning from Observation.
Faraz Torabi, Sean Geiger, Garrett Warnell, and Peter Stone.
In Imitation, Intent, and Interaction (I3) Workshop at ICML 2019, June 2019.
Imitation from observation is the framework of learning tasks by observing demonstrated state-only trajectories. Recently, adversarial approaches have achieved significant performance improvements over other methods for imitating complex behaviors. However, these adversarial imitation algorithms often require many demonstration examples and learning iterations to produce a policy that successfully imitates a demonstrator's behavior. This high sample complexity often prohibits these algorithms from being deployed on physical robots. In this paper, we propose an algorithm that addresses the sample-inefficiency problem by utilizing ideas from trajectory-centric reinforcement learning algorithms. We test our algorithm on an imitation task using a physical robot arm and its simulated version in Gazebo, and show improvements in learning rate and efficiency.
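As a rough illustration of the adversarial imitation-from-observation setup the abstract refers to (in the spirit of GAIfO), the following PyTorch sketch trains a discriminator on state-transition pairs (s, s') and uses its output as a reward signal for the imitating policy. The class, function names, and network sizes are illustrative assumptions, not the paper's implementation, and the paper's specific sample-efficiency mechanism is not shown.

import torch
import torch.nn as nn

# Hypothetical sketch: a discriminator over state transitions (s, s'),
# as in adversarial imitation from observation. Dimensions are assumptions.
class TransitionDiscriminator(nn.Module):
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, s_next):
        # Score a transition; higher logits mean "looks like the demonstrator".
        return self.net(torch.cat([s, s_next], dim=-1))

def discriminator_loss(disc, expert_s, expert_s_next, policy_s, policy_s_next):
    # Binary cross-entropy: expert transitions labeled 1, policy transitions 0.
    bce = nn.BCEWithLogitsLoss()
    expert_logits = disc(expert_s, expert_s_next)
    policy_logits = disc(policy_s, policy_s_next)
    return (bce(expert_logits, torch.ones_like(expert_logits)) +
            bce(policy_logits, torch.zeros_like(policy_logits)))

def imitation_reward(disc, s, s_next):
    # Reward for the policy: higher when the discriminator mistakes the
    # policy's transitions for the demonstrator's.
    with torch.no_grad():
        return torch.sigmoid(disc(s, s_next))

In this setup the policy is trained with any reinforcement-learning algorithm against imitation_reward, while discriminator_loss is minimized on alternating updates.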
@InProceedings{ICML19-torabi,
author = {Faraz Torabi and Sean Geiger and Garrett Warnell and Peter Stone},
title = {Sample-efficient Adversarial Imitation Learning from Observation},
booktitle = {Imitation, Intent, and Interaction (I3) Workshop at ICML 2019},
location = {Long Beach, California, USA},
month = {June},
year = {2019},
abstract = {
Imitation from observation is the framework of learning tasks by observing
demonstrated state-only trajectories. Recently, adversarial approaches have
achieved significant performance improvements over other methods for imitating
complex behaviors. However, these adversarial imitation algorithms often
require many demonstration examples and learning iterations to produce a
policy that successfully imitates a demonstrator's behavior. This high
sample complexity often prohibits these algorithms from being deployed on
physical robots. In this paper, we propose an algorithm that addresses the
sample-inefficiency problem by utilizing ideas from trajectory-centric
reinforcement learning algorithms. We test our algorithm on an imitation
task using a physical robot arm and its simulated version in Gazebo, and
show improvements in learning rate and efficiency.
},
}