Peter Stone's Selected Publications



RLZero: Direct Policy Inference from Language Without In-Domain Supervision

RLZero: Direct Policy Inference from Language Without In-Domain Supervision.
Harshit Sikchi, Siddhant Agarwal, Pranaya Jajoo, Samyak Parajuli, Caleb Chuck, Max Rudolph, Peter Stone, Amy Zhang, and Scott Niekum.
In Conference on Neural Information Processing Systems (NeurIPS), December 2025.

Download

[PDF] 1.7MB

Abstract

The reward hypothesis states that all goals and purposes can be understood as the maximization of a received scalar reward signal. However, in practice, defining such a reward signal is notoriously difficult, as humans are often unable to predict the optimal behavior corresponding to a reward function. Natural language offers an intuitive alternative for instructing reinforcement learning (RL) agents, yet previous language-conditioned approaches either require costly supervision or test-time training given a language instruction. In this work, we present a new approach that uses a pretrained RL agent trained using only unlabeled, offline interactions --- without task-specific supervision or labeled trajectories --- to get zero-shot test-time policy inference from arbitrary natural language instructions. We introduce a framework comprising three steps: imagine, project, and imitate. First, the agent imagines a sequence of observations corresponding to the provided language description using video generative models. Next, these imagined observations are projected into the target environment domain. Finally, an agent pretrained in the target environment with unsupervised RL instantly imitates the projected observation sequence through a closed-form solution. To the best of our knowledge, our method, RLZero, is the first approach to show direct language-to-behavior generation abilities on a variety of tasks and environments without any in-domain supervision. We further show that components of RLZero can be used to generate policies zero-shot from cross-embodied videos, such as those available on YouTube, even for complex embodiments like humanoids.
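The imagine-project-imitate pipeline described above can be outlined in code. The sketch below is purely illustrative: every function body is a placeholder assumption (the paper's actual components are a video generative model, a domain projector, and an unsupervised-RL agent with a closed-form imitation solution, none of which are reproduced here), and all names and dimensions are hypothetical.

```python
import numpy as np

OBS_DIM = 4  # hypothetical observation dimensionality


def imagine(instruction: str, horizon: int = 8) -> np.ndarray:
    """Step 1: 'imagine' an observation sequence for a language instruction.
    Placeholder for a pretrained video generative model."""
    rng = np.random.default_rng(abs(hash(instruction)) % (2**32))
    return rng.standard_normal((horizon, OBS_DIM))


def project(imagined: np.ndarray) -> np.ndarray:
    """Step 2: project imagined observations into the target environment's
    observation space (a fixed linear map stands in for the real projector)."""
    W = np.eye(OBS_DIM)  # placeholder projection matrix
    return imagined @ W


def imitate(projected: np.ndarray, features: np.ndarray) -> np.ndarray:
    """Step 3: closed-form imitation. As a stand-in for the paper's
    unsupervised-RL solution, solve a least-squares match between
    pretrained features and the projected observation sequence."""
    target = projected.mean(axis=0)
    w, *_ = np.linalg.lstsq(features, target, rcond=None)
    return w


# Toy end-to-end run with an identity "pretrained feature" matrix.
features = np.eye(OBS_DIM)
trajectory = project(imagine("walk forward"))
policy_weights = imitate(trajectory, features)
print(policy_weights.shape)
```

Note that only the control flow mirrors the abstract: language in, imagined observations, projection, then an instant (closed-form) imitation step with no test-time training.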

BibTeX Entry

@InProceedings{sikchirlzero,
  author   = {Harshit Sikchi and Siddhant Agarwal and Pranaya Jajoo and Samyak Parajuli and Caleb Chuck and Max Rudolph and Peter Stone and Amy Zhang and Scott Niekum},
  title    = {RLZero: Direct Policy Inference from Language Without In-Domain Supervision},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  year     = {2025},
  month    = {December},
  location = {San Diego, United States},
  abstract = {The reward hypothesis states that all goals and purposes can be understood as the 
maximization of a received scalar reward signal. However, in practice, defining 
such a reward signal is notoriously difficult, as humans are often unable to 
predict the optimal behavior corresponding to a reward function. Natural language 
offers an intuitive alternative for instructing reinforcement learning (RL) 
agents, yet previous language-conditioned approaches either require costly 
supervision or test-time training given a language instruction. In this work, we 
present a new approach that uses a pretrained RL agent trained using only 
unlabeled, offline interactions --- without task-specific supervision or labeled 
trajectories --- to get zero-shot test-time policy inference from arbitrary natural 
language instructions. We introduce a framework comprising three steps: imagine, 
project, and imitate. First, the agent imagines a sequence of observations 
corresponding to the provided language description using video generative models. 
Next, these imagined observations are projected into the target environment 
domain. Finally, an agent pretrained in the target environment with unsupervised 
RL instantly imitates the projected observation sequence through a closed-form 
solution. To the best of our knowledge, our method, RLZero, is the first approach 
to show direct language-to-behavior generation abilities on a variety of tasks 
and environments without any in-domain supervision. We further show that 
components of RLZero can be used to generate policies zero-shot from 
cross-embodied videos, such as those available on YouTube, even for complex 
embodiments like humanoids. 
  },
}

Generated by bib2html.pl (written by Patrick Riley) on Fri Apr 17, 2026 17:16:21