Peter Stone's Selected Publications



Skeletal Feature Compensation for Imitation Learning with Embodiment Mismatch

Skeletal Feature Compensation for Imitation Learning with Embodiment Mismatch.
Eddy Hudson, Garrett Warnell, Faraz Torabi, and Peter Stone.
In International Conference on Robotics and Automation (ICRA), May 2022.
Presentation Video: https://www.youtube.com/watch?v=Git3ccvCIGA

Download

[PDF] (2.5MB)

Abstract

Learning from demonstrations in the wild (e.g. YouTube videos) is a tantalizing goal in imitation learning. However, for this goal to be achieved, imitation learning algorithms must deal with the fact that the demonstrators and learners may have bodies that differ from one another. This condition — "embodiment mismatch" — is ignored by many recent imitation learning algorithms. Our proposed imitation learning technique, SILEM (Skeletal feature compensation for Imitation Learning with Embodiment Mismatch), addresses a particular type of embodiment mismatch by introducing a learned affine transform to compensate for differences in the skeletal features obtained from the learner and expert. We create toy domains based on PyBullet’s HalfCheetah and Ant to assess SILEM’s benefits for this type of embodiment mismatch. We also provide qualitative and quantitative results on more realistic problems — teaching simulated humanoid agents, including Atlas from Boston Dynamics, to walk by observing human demonstrations.
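To illustrate the compensation idea described in the abstract, the sketch below shows how a learned affine transform over skeletal features can be written as a single linear layer that maps the learner's features toward the expert's feature space. This is only a minimal illustration of the general concept, not the paper's implementation; the class name, feature dimension, and identity initialization are assumptions for the example.

# Hypothetical sketch: a learned affine transform applied to skeletal features.
# Not the paper's code; AffineCompensator and skeletal_dim are illustrative names.
import torch
import torch.nn as nn

class AffineCompensator(nn.Module):
    """Applies y = A x + b to per-frame skeletal features of the learner."""
    def __init__(self, skeletal_dim: int):
        super().__init__()
        # nn.Linear is an affine map: weight A and bias b are both learned.
        self.affine = nn.Linear(skeletal_dim, skeletal_dim)
        # Start near the identity so training begins from "no compensation".
        nn.init.eye_(self.affine.weight)
        nn.init.zeros_(self.affine.bias)

    def forward(self, learner_features: torch.Tensor) -> torch.Tensor:
        return self.affine(learner_features)

# Usage: compensate a batch of learner features before comparing them
# with expert features (e.g., inside an imitation-learning discriminator).
compensator = AffineCompensator(skeletal_dim=18)   # 18 is a placeholder dimension
learner_batch = torch.randn(32, 18)                # placeholder skeletal features
compensated = compensator(learner_batch)           # shape (32, 18)

In practice the transform's parameters would be trained jointly with the rest of the imitation-learning pipeline, so the mapping between learner and expert feature spaces is discovered rather than hand-specified.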

BibTeX Entry

@InProceedings{icra22-hudson,
  author    = {Eddy Hudson and Garrett Warnell and Faraz Torabi and Peter Stone},
  booktitle = {International Conference on Robotics and Automation (ICRA)},
  title     = {Skeletal Feature Compensation for Imitation Learning with Embodiment Mismatch},
  month     = {May},
  year      = {2022},
  location  = {Philadelphia, USA},
  abstract  = {Learning from demonstrations in the wild (e.g. YouTube videos) is a tantalizing goal in imitation learning. However, for this goal to be achieved, imitation learning algorithms must deal with the fact that the demonstrators and learners may have bodies that differ from one another. This condition — "embodiment mismatch" — is ignored by many recent imitation learning algorithms. Our proposed imitation learning technique, SILEM (Skeletal feature compensation for Imitation Learning with Embodiment Mismatch), addresses a particular type of embodiment mismatch by introducing a learned affine transform to compensate for differences in the skeletal features obtained from the learner and expert. We create toy domains based on PyBullet's HalfCheetah and Ant to assess SILEM's benefits for this type of embodiment mismatch. We also provide qualitative and quantitative results on more realistic problems — teaching simulated humanoid agents, including Atlas from Boston Dynamics, to walk by observing human demonstrations.},
  wwwnote   = {<a href="https://www.youtube.com/watch?v=Git3ccvCIGA">Presentation Video</a>},
}
