Peter Stone's Selected Publications



Sample-efficient Adversarial Imitation Learning from Observation

Sample-efficient Adversarial Imitation Learning from Observation.
Faraz Torabi, Sean Geiger, Garrett Warnell, and Peter Stone.
In Imitation, Intent, and Interaction (I3) Workshop at ICML 2019, June 2019.

Download

[PDF] (6.1MB)

Abstract

Imitation from observation is the framework of learning tasks by observing demonstrated state-only trajectories. Recently, adversarial approaches have achieved significant performance improvements over other methods for imitating complex behaviors. However, these adversarial imitation algorithms often require many demonstration examples and learning iterations to produce a policy that successfully imitates a demonstrator's behavior. This high sample complexity often prohibits these algorithms from being deployed on physical robots. In this paper, we propose an algorithm that addresses the sample-inefficiency problem by utilizing ideas from trajectory-centric reinforcement learning algorithms. We test our algorithm on an imitation task with a physical robot arm and its simulated version in Gazebo, and show improvements in learning rate and efficiency.
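
For a concrete picture of the setup the abstract describes, below is a minimal sketch of adversarial imitation from observation in the style of the authors' earlier GAIfO work: a discriminator scores state transitions (s, s') rather than state-action pairs, so only state-only demonstrations are needed, and its output serves as a learned reward for the imitating policy. The class and function names, network sizes, and hyperparameters are illustrative assumptions, not the paper's implementation (which additionally incorporates trajectory-centric RL ideas to reduce sample complexity).

# Illustrative PyTorch sketch of a GAIfO-style discriminator over state
# transitions; all names and hyperparameters are assumptions, not the
# authors' code.
import torch
import torch.nn as nn

class TransitionDiscriminator(nn.Module):
    # Classifies (s, s') pairs: demonstrator transitions -> 1, imitator -> 0.
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))

def discriminator_step(disc, opt, demo_s, demo_s2, agent_s, agent_s2):
    # One adversarial update: push demonstrator transitions toward label 1
    # and the imitator's transitions toward label 0.
    bce = nn.BCEWithLogitsLoss()
    loss = (bce(disc(demo_s, demo_s2), torch.ones(len(demo_s), 1))
            + bce(disc(agent_s, agent_s2), torch.zeros(len(agent_s), 1)))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def imitation_reward(disc, s, s_next):
    # Reward for the policy's RL update: high when the discriminator
    # mistakes the imitator's transitions for the demonstrator's.
    with torch.no_grad():
        return torch.log(torch.sigmoid(disc(s, s_next)) + 1e-8)

The imitating policy would then be trained with any reinforcement learning algorithm against imitation_reward; per the abstract, the paper's contribution lies in reducing the number of such environment interactions by borrowing ideas from trajectory-centric RL.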

BibTeX Entry

@InProceedings{ICML19-torabi,
  author = {Faraz Torabi and Sean Geiger and Garrett Warnell and Peter Stone},
  title = {Sample-efficient Adversarial Imitation Learning from Observation},
  booktitle = {Imitation, Intent, and Interaction (I3) Workshop at ICML 2019},
  location = {Long Beach, California, USA},
  month = {June},
  year = {2019},
  abstract = {
Imitation from observation is the framework of learning tasks by observing
demonstrated state-only trajectories. Recently, adversarial approaches have
achieved significant performance improvements over other methods for imitating
complex behaviors. However, these adversarial imitation algorithms often
require many demonstration examples and learning iterations to produce a
policy that successfully imitates a demonstrator's behavior. This high
sample complexity often prohibits these algorithms from being deployed on
physical robots. In this paper, we propose an algorithm that addresses the
sample-inefficiency problem by utilizing ideas from trajectory-centric
reinforcement learning algorithms. We test our algorithm on an imitation
task with a physical robot arm and its simulated version in Gazebo, and
show improvements in learning rate and efficiency.
  },
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 17, 2024 18:42:57