Benchmarking Massively Parallelized Multi-Task Reinforcement Learning for Robotics Tasks.
Viraj Joshi, Zifan Xu, Bo Liu, Peter Stone, and Amy Zhang.
In Reinforcement Learning Conference (RLC), August 2025.
Multi-task Reinforcement Learning (MTRL) has emerged as a critical training paradigm for applying reinforcement learning (RL) to a set of complex real-world robotic tasks, which demands a generalizable and robust policy. At the same time, massively parallelized training has gained popularity, not only for significantly accelerating data collection through GPU-accelerated simulation but also for enabling diverse data collection across multiple tasks by simulating heterogeneous scenes in parallel. However, existing MTRL research has largely been limited to off-policy methods like SAC in the low-parallelization regime. MTRL could capitalize on the higher asymptotic performance of on-policy algorithms, whose batches require data from the current policy, and as a result, take advantage of massive parallelization offered by GPU-accelerated simulation. To bridge this gap, we introduce a massively parallelized Multi-Task Benchmark for robotics (MTBench), an open-sourced benchmark featuring a broad distribution of 50 manipulation tasks and 20 locomotion tasks, implemented using the GPU-accelerated simulator IsaacGym. MTBench also includes four base RL algorithms combined with seven state-of-the-art MTRL algorithms and architectures, providing a unified framework for evaluating their performance. Our extensive experiments highlight the superior speed of evaluating MTRL approaches using MTBench, while also uncovering unique challenges that arise from combining massive parallelism with MTRL. Code is available at https://github.com/Viraj-Joshi/MTBench.
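The abstract's key point, that on-policy batches must come from the current policy and therefore benefit directly from simulating many heterogeneous task scenes at once, can be illustrated with a minimal sketch. This is not the MTBench API; all names, dimensions, and the placeholder policy and simulator below are made up for illustration.

# Illustrative sketch (not MTBench code): one synchronized rollout over many
# parallel environments, each assigned its own task, yields an on-policy batch
# that already spans the whole task distribution.
import numpy as np

NUM_ENVS = 4096        # parallel (GPU-simulated) environments
NUM_TASKS = 50         # e.g., a manipulation task set
OBS_DIM, ACT_DIM = 32, 8
HORIZON = 16           # short on-policy rollout per update

rng = np.random.default_rng(0)
task_ids = rng.integers(0, NUM_TASKS, size=NUM_ENVS)  # heterogeneous scenes

def policy(obs, task_ids):
    # Stand-in for the *current* task-conditioned policy.
    return rng.standard_normal((obs.shape[0], ACT_DIM))

def step(obs, actions, task_ids):
    # Stand-in for one batched simulator step across all environments.
    next_obs = obs + 0.01 * rng.standard_normal(obs.shape)
    rewards = -np.linalg.norm(actions, axis=-1)  # placeholder reward
    return next_obs, rewards

obs = rng.standard_normal((NUM_ENVS, OBS_DIM))
batch_obs, batch_act, batch_rew = [], [], []
for _ in range(HORIZON):
    act = policy(obs, task_ids)          # actions from the current policy
    obs, rew = step(obs, act, task_ids)
    batch_obs.append(obs); batch_act.append(act); batch_rew.append(rew)

# HORIZON * NUM_ENVS on-policy transitions, spread across all tasks,
# ready for a PPO-style update.
print(len(batch_obs) * NUM_ENVS, "on-policy transitions covering",
      len(np.unique(task_ids)), "tasks")

Under this framing, an off-policy method in a low-parallelization regime would instead rely on a replay buffer of stale data, which is the gap the benchmark is designed to probe.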
@InProceedings{viraj_joshi_rlc2025,
  author    = {Viraj Joshi and Zifan Xu and Bo Liu and Peter Stone and Amy Zhang},
  title     = {Benchmarking Massively Parallelized Multi-Task Reinforcement Learning for Robotics Tasks},
  booktitle = {Reinforcement Learning Conference (RLC)},
  year      = {2025},
  month     = {August},
  location  = {Edmonton, Canada},
  abstract  = {Multi-task Reinforcement Learning (MTRL) has emerged as a critical training paradigm for applying reinforcement learning (RL) to a set of complex real-world robotic tasks, which demands a generalizable and robust policy. At the same time, massively parallelized training has gained popularity, not only for significantly accelerating data collection through GPU-accelerated simulation but also for enabling diverse data collection across multiple tasks by simulating heterogeneous scenes in parallel. However, existing MTRL research has largely been limited to off-policy methods like SAC in the low-parallelization regime. MTRL could capitalize on the higher asymptotic performance of on-policy algorithms, whose batches require data from the current policy, and as a result, take advantage of massive parallelization offered by GPU-accelerated simulation. To bridge this gap, we introduce a massively parallelized Multi-Task Benchmark for robotics (MTBench), an open-sourced benchmark featuring a broad distribution of 50 manipulation tasks and 20 locomotion tasks, implemented using the GPU-accelerated simulator IsaacGym. MTBench also includes four base RL algorithms combined with seven state-of-the-art MTRL algorithms and architectures, providing a unified framework for evaluating their performance. Our extensive experiments highlight the superior speed of evaluating MTRL approaches using MTBench, while also uncovering unique challenges that arise from combining massive parallelism with MTRL. Code is available at https://github.com/Viraj-Joshi/MTBench.},
}