Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation (2022)
Yifeng Zhu, Peter Stone, and Yuke Zhu
We tackle real-world long-horizon robot manipulation tasks through skill discovery. We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations and use these skills to synthesize prolonged robot behaviors. Our method starts by constructing a hierarchical task structure from each demonstration through agglomerative clustering. From the task structures of multi-task demonstrations, we identify skills based on recurring patterns and train goal-conditioned sensorimotor policies with hierarchical imitation learning. Finally, we train a meta controller that composes these skills to solve long-horizon manipulation tasks. The entire model can be trained on a small set of human demonstrations collected within 30 minutes without further annotations, making it amenable to real-world deployment. We systematically evaluate our method in simulation environments and on a real robot. Our method outperforms state-of-the-art imitation learning methods on multi-stage manipulation tasks. Furthermore, skills discovered from multi-task demonstrations boost the average task success rate by 8% compared to skills discovered from individual tasks. (A minimal illustrative sketch of the bottom-up segmentation idea appears below.)
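The sketch below is not the paper's implementation; it only illustrates the bottom-up idea under stated assumptions: per-frame features of one unsegmented demonstration are agglomeratively clustered (with a hypothetical time-weighting term to keep clusters temporally local), and the resulting hierarchy is cut into contiguous candidate segments from which recurring skills could later be mined.

```python
# Illustrative sketch only -- not the authors' code. Assumes per-frame feature
# embeddings are available; the time-weighting factor is a made-up heuristic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster


def segment_demonstration(features: np.ndarray, num_segments: int):
    """Agglomeratively cluster frame features and return contiguous segments.

    features: (T, D) array of per-frame embeddings (e.g., encoded observations).
    num_segments: number of top-level segments to cut the hierarchy into.
    """
    T = features.shape[0]
    # Append a scaled time index so clusters respect temporal locality
    # (an assumption standing in for the paper's hierarchical construction).
    time_feature = np.linspace(0.0, 1.0, T).reshape(-1, 1)
    augmented = np.hstack([features, 5.0 * time_feature])

    # Bottom-up (agglomerative) clustering with Ward linkage, then cut the tree.
    tree = linkage(augmented, method="ward")
    labels = fcluster(tree, t=num_segments, criterion="maxclust")

    # Convert the per-frame cluster labels into contiguous (start, end) spans.
    segments, start = [], 0
    for t in range(1, T):
        if labels[t] != labels[t - 1]:
            segments.append((start, t))
            start = t
    segments.append((start, T))
    return segments


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic demonstration: three regimes with different feature means.
    demo = np.vstack([rng.normal(m, 0.1, size=(40, 8)) for m in (0.0, 1.0, 2.0)])
    print(segment_demonstration(demo, num_segments=3))
```

In the actual method, segments like these would be compared across multi-task demonstrations to find recurring patterns, each pattern becoming a goal-conditioned skill that the meta controller sequences at test time.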
Citation:
IEEE Robotics and Automation Letters (2022).