Peter Stone's Selected Publications



Task Phasing: Automated Curriculum Learning from Demonstrations

Task Phasing: Automated Curriculum Learning from Demonstrations.
Vaibhav Bajaj, Guni Sharon, and Peter Stone.
In Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), July 2023.
Accompanying code

Download

[PDF] (416.0 kB)

Abstract

Applying reinforcement learning (RL) to sparse reward domains is notoriously challenging due to insufficient guiding signals. Common RL techniques for addressing such domains include (1) learning from demonstrations and (2) curriculum learning. While these two approaches have been studied in detail, they have rarely been considered together. This paper aims to do so by introducing a principled task phasing approach that uses demonstrations to automatically generate a curriculum sequence. Using inverse RL from (suboptimal) demonstrations we define a simple initial task. Our task phasing approach then provides a framework to gradually increase the complexity of the task all the way to the target task, while retuning the RL agent in each phasing iteration. Two approaches for phasing are considered: (1) gradually increasing the proportion of time steps an RL agent is in control, and (2) phasing out a guiding informative reward function. We present conditions that guarantee the convergence of these approaches to an optimal policy. Experimental results on 3 sparse reward domains demonstrate that our task phasing approaches outperform state-of-the-art approaches with respect to asymptotic performance.
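The first phasing approach from the abstract, gradually increasing the proportion of time steps the RL agent is in control, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `env`, `demo_policy`, and `rl_policy` interfaces and the linear schedule are assumptions made for the example.

```python
def phased_rollout(env, demo_policy, rl_policy, alpha, horizon=100):
    """Run one episode in which the RL agent controls the final `alpha`
    fraction of time steps, while the (possibly suboptimal) demonstration
    policy controls the earlier steps. All interfaces here are hypothetical.
    """
    obs = env.reset()
    total_reward = 0.0
    for t in range(horizon):
        # Switch control to the RL agent once the demonstrator's
        # (1 - alpha) share of the horizon has elapsed.
        policy = rl_policy if t >= (1 - alpha) * horizon else demo_policy
        obs, reward, done = env.step(policy(obs))
        total_reward += reward
        if done:
            break
    return total_reward

def phasing_schedule(num_phases):
    """Linearly increase the RL agent's share of control from 0 to 1
    across phasing iterations (one of many possible schedules)."""
    return [k / (num_phases - 1) for k in range(num_phases)]
```

At `alpha = 0` the demonstrator runs the whole episode (the simple initial task); at `alpha = 1` the RL agent is in full control (the target task), with the agent retuned at each intermediate phase.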

BibTeX Entry

@InProceedings{ICAPS23-bajaj,
  author = {Vaibhav Bajaj and Guni Sharon and Peter Stone},
  title = {Task Phasing: Automated Curriculum Learning from Demonstrations},
  booktitle = {Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023)},
  location = {Prague, Czech Republic},
  month = {July},
  year = {2023},
  abstract = {  
              Applying reinforcement learning (RL) to sparse reward
              domains is notoriously challenging due to insufficient
              guiding signals. Common RL techniques for addressing
              such domains include (1) learning from demonstrations
              and (2) curriculum learning. While these two approaches
              have been studied in detail, they have rarely been
              considered together. This paper aims to do so by
              introducing a principled task phasing approach that uses
              demonstrations to automatically generate a curriculum
              sequence. Using inverse RL from (suboptimal)
              demonstrations we define a simple initial task. Our task
              phasing approach then provides a framework to gradually
              increase the complexity of the task all the way to the
              target task, while retuning the RL agent in each phasing
              iteration. Two approaches for phasing are considered:
              (1) gradually increasing the proportion of time steps an
              RL agent is in control, and (2) phasing out a guiding
              informative reward function. We present conditions that
              guarantee the convergence of these approaches to an
              optimal policy. Experimental results on 3 sparse reward
              domains demonstrate that our task phasing approaches
              outperform state-of-the-art approaches with respect to
              asymptotic performance.
  },
  wwwnote={Accompanying <a href="https://github.com/ParanoidAndroid96/Task-Phasing.git">code</a>},
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 17, 2024 18:42:50