Sample Efficient Myopic Exploration Through Multitask Reinforcement Learning with Diverse Tasks.
Ziping Xu, Zifan Xu, Runxuan Jiang, Peter Stone, and Ambuj Tewari.
In International Conference on Learning Representations (ICLR), May 2024.
Multitask Reinforcement Learning (MTRL) approaches have gained increasing attention for their wide applications in many important Reinforcement Learning (RL) tasks. However, while recent advancements in MTRL theory have focused on improved statistical efficiency by assuming a shared structure across tasks, exploration--a crucial aspect of RL--has been largely overlooked. This paper addresses this gap by showing that when an agent is trained on a sufficiently diverse set of tasks, a generic policy-sharing algorithm with a myopic exploration design such as epsilon-greedy, which is inefficient in general, can be sample-efficient for MTRL. To the best of our knowledge, this is the first theoretical demonstration of the "exploration benefits" of MTRL. It may also shed light on the enigmatic empirical success of myopic exploration in practice. To validate the role of diversity, we conduct experiments on synthetic robotic control environments, where the diverse task set aligns with the task selection by automatic curriculum learning, which is empirically shown to improve sample efficiency.
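The abstract refers to epsilon-greedy as an example of a myopic exploration design. As a minimal illustration only (this is not the paper's algorithm, just the standard epsilon-greedy rule it alludes to), action selection over tabular Q-value estimates can be sketched as:

```python
import random

def epsilon_greedy_action(q_values, epsilon=0.1, rng=random):
    """Myopic exploration: with probability epsilon take a uniformly
    random action; otherwise act greedily on current Q-value estimates."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    # Greedy choice: index of the largest estimated Q-value.
    return max(range(len(q_values)), key=lambda a: q_values[a])

# With epsilon = 0 the rule is purely greedy:
epsilon_greedy_action([0.1, 0.9, 0.3], epsilon=0.0)  # -> 1
```

The paper's point, as stated in the abstract, is that this kind of locally greedy exploration, which can be exponentially sample-inefficient on a single hard task, becomes sample-efficient when the policy is shared across a sufficiently diverse set of tasks.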
@InProceedings{ziping_xu_ICLR2024,
author = {Ziping Xu and Zifan Xu and Runxuan Jiang and Peter Stone and Ambuj Tewari},
title = {Sample Efficient Myopic Exploration Through Multitask Reinforcement Learning with Diverse Tasks},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2024},
month = {May},
address = {Vienna, Austria},
abstract = {Multitask Reinforcement Learning (MTRL) approaches have gained increasing
attention for their wide applications in many important Reinforcement Learning (RL)
tasks. However, while recent advancements in MTRL theory have focused on
improved statistical efficiency by assuming a shared structure across tasks,
exploration--a crucial aspect of RL--has been largely overlooked. This paper
addresses this gap by showing that when an agent is trained on a sufficiently
diverse set of tasks, a generic policy-sharing algorithm with a myopic exploration
design such as epsilon-greedy, which is inefficient in general, can be
sample-efficient for MTRL. To the best of our knowledge, this is the first
theoretical demonstration of the "exploration benefits" of MTRL. It may also shed
light on the enigmatic empirical success of myopic exploration in
practice. To validate the role of diversity, we conduct experiments on synthetic
robotic control environments, where the diverse task set aligns with the task
selection by automatic curriculum learning, which is empirically shown to improve
sample efficiency.
},
}
Generated by bib2html.pl (written by Patrick Riley) on Sat Nov 15, 2025 21:30:12