RLZero: Direct Policy Inference from Language Without In-Domain Supervision.
Harshit Sikchi, Siddhant Agarwal, Pranaya
Jajoo, Samyak Parajuli, Caleb Chuck, Max Rudolph, Peter Stone, Amy Zhang, and
Scott Niekum.
In Conference on Neural Information Processing Systems
(NeurIPS), December 2025.
The reward hypothesis states that all goals and purposes can be understood as the maximization of a received scalar reward signal. However, in practice, defining such a reward signal is notoriously difficult, as humans are often unable to predict the optimal behavior corresponding to a reward function. Natural language offers an intuitive alternative for instructing reinforcement learning (RL) agents, yet previous language-conditioned approaches either require costly supervision or test-time training given a language instruction. In this work, we present a new approach that uses a pretrained RL agent trained using only unlabeled, offline interactions, without task-specific supervision or labeled trajectories, to get zero-shot test-time policy inference from arbitrary natural language instructions. We introduce a framework comprising three steps: imagine, project, and imitate. First, the agent imagines a sequence of observations corresponding to the provided language description using video generative models. Next, these imagined observations are projected into the target environment domain. Finally, an agent pretrained in the target environment with unsupervised RL instantly imitates the projected observation sequence through a closed-form solution. To the best of our knowledge, our method, RLZero, is the first approach to show direct language-to-behavior generation abilities on a variety of tasks and environments without any in-domain supervision. We further show that components of RLZero can be used to generate policies zero-shot from cross-embodied videos, such as those available on YouTube, even for complex embodiments like humanoids.
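The imagine–project–imitate pipeline in the abstract can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the authors' implementation: `imagine` fakes a video generative model with random observations, `project` maps each imagined observation to its nearest in-domain anchor, and `imitate` uses a least-squares fit as a toy proxy for the closed-form imitation step.

```python
# Illustrative sketch of the three RLZero steps described in the abstract.
# All function names and internals are hypothetical stand-ins.
import numpy as np

def imagine(instruction, horizon=4, obs_dim=3):
    """Stand-in for a video generative model: produce a sequence of
    'imagined' observations conditioned on a language instruction."""
    rng = np.random.default_rng(abs(hash(instruction)) % (2**32))
    return rng.standard_normal((horizon, obs_dim))

def project(imagined, anchors):
    """Map each imagined observation onto its nearest in-domain anchor
    observation (a toy proxy for projecting into the target domain)."""
    dists = np.linalg.norm(imagined[:, None, :] - anchors[None, :, :], axis=-1)
    return anchors[dists.argmin(axis=1)]

def imitate(projected, features):
    """Toy 'closed-form' imitation: least-squares fit of linear weights
    that reproduce the projected observation sequence from state features."""
    weights, *_ = np.linalg.lstsq(features, projected, rcond=None)
    return weights

if __name__ == "__main__":
    imagined = imagine("run forward")          # (4, 3) imagined observations
    anchors = np.eye(3)                        # toy in-domain observations
    projected = project(imagined, anchors)     # snapped to the target domain
    features = np.arange(12.0).reshape(4, 3)   # toy per-step state features
    policy_weights = imitate(projected, features)
    print(policy_weights.shape)                # (3, 3) linear policy weights
```

The point of the sketch is only the data flow: language in, imagined observations out, projection into the agent's own observation space, then a single solve rather than test-time training.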
@InProceedings{sikchirlzero,
author = {Harshit Sikchi and Siddhant Agarwal and Pranaya Jajoo and Samyak Parajuli and Caleb Chuck and Max Rudolph and Peter Stone and Amy Zhang and Scott Niekum},
title = {RLZero: Direct Policy Inference from Language Without In-Domain Supervision},
booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
year = {2025},
month = {December},
location = {San Diego, United States},
abstract = {The reward hypothesis states that all goals and purposes can be understood as the
maximization of a received scalar reward signal. However, in practice, defining
such a reward signal is notoriously difficult, as humans are often unable to
predict the optimal behavior corresponding to a reward function. Natural language
offers an intuitive alternative for instructing reinforcement learning (RL)
agents, yet previous language-conditioned approaches either require costly
supervision or test-time training given a language instruction. In this work, we
present a new approach that uses a pretrained RL agent trained using only
unlabeled, offline interactions---without task-specific supervision or labeled
trajectories---to get zero-shot test-time policy inference from arbitrary natural
language instructions. We introduce a framework comprising three steps: imagine,
project, and imitate. First, the agent imagines a sequence of observations
corresponding to the provided language description using video generative models.
Next, these imagined observations are projected into the target environment
domain. Finally, an agent pretrained in the target environment with unsupervised
RL instantly imitates the projected observation sequence through a closed-form
solution. To the best of our knowledge, our method, RLZero, is the first approach
to show direct language-to-behavior generation abilities on a variety of tasks
and environments without any in-domain supervision. We further show that
components of RLZero can be used to generate policies zero-shot from
cross-embodied videos, such as those available on YouTube, even for complex
embodiments like humanoids.
},
}
Generated by bib2html.pl (written by Patrick Riley) on Mon Feb 23, 2026 19:28:57