Dyna-LfLH: Learning Agile Navigation in Dynamic Environments from Learned Hallucination.
Saad Abdul Ghani, Zizhao Wang, Peter Stone, and Xuesu Xiao.
In International Conference on Intelligent Robots and Systems, October 2025.
This paper introduces Dynamic Learning from Learned Hallucination (Dyna-LfLH), a self-supervised method for training motion planners to navigate environments with dense and dynamic obstacles. Classical planners struggle with dense, unpredictable obstacles due to limited computation, while learning-based planners face challenges in acquiring high-quality demonstrations for imitation learning or dealing with exploration inefficiencies in reinforcement learning. Building on Learning from Hallucination (LfH), which synthesizes training data from past successful navigation experiences in simpler environments, Dyna-LfLH incorporates dynamic obstacles by generating them through a learned latent distribution. This enables efficient and safe motion planner training. We evaluate Dyna-LfLH on a ground robot in both simulated and real environments, achieving up to a 25 percent improvement in success rate compared to baselines.
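The abstract's core idea, hallucinating dynamic obstacles by sampling from a learned latent distribution, can be illustrated with a toy sketch. This is not the paper's actual model: the decoder weights, dimensions, and function names below are placeholders, standing in for a generative model that Dyna-LfLH would train on past navigation data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decoder standing in for a trained latent obstacle model;
# in Dyna-LfLH this mapping would be learned, not random.
LATENT_DIM = 4
HORIZON = 8  # time steps per hallucinated obstacle trajectory
W = rng.normal(scale=0.3, size=(LATENT_DIM, HORIZON * 2))

def hallucinate_obstacle(rng):
    """Sample one dynamic-obstacle trajectory from a latent Gaussian.

    Returns an (HORIZON, 2) array of 2-D obstacle positions over time.
    """
    z = rng.normal(size=LATENT_DIM)        # latent code ~ N(0, I)
    steps = (z @ W).reshape(HORIZON, 2)    # "decode" latent into motion steps
    return np.cumsum(steps, axis=0)        # integrate steps into a smooth path

# Each sample is a distinct moving obstacle for self-supervised planner training.
obstacles = [hallucinate_obstacle(rng) for _ in range(3)]
```

Sampling many such trajectories yields arbitrarily large synthetic training sets without collecting demonstrations in genuinely dangerous dynamic scenes, which is the efficiency argument the abstract makes.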
@InProceedings{dyna_lflh_icra_2025,
author = {Saad Abdul Ghani and Zizhao Wang and Peter Stone and Xuesu Xiao},
title = {Dyna-LfLH: Learning Agile Navigation in Dynamic Environments from Learned Hallucination},
booktitle = {International Conference on Intelligent Robots and Systems},
year = {2025},
month = {October},
location = {Hangzhou, China},
abstract = {This paper introduces Dynamic Learning from Learned Hallucination (Dyna-LfLH), a
self-supervised method for training motion planners to navigate environments with
dense and dynamic obstacles. Classical planners struggle with dense,
unpredictable obstacles due to limited computation, while learning-based planners
face challenges in acquiring high-quality demonstrations for imitation learning
or dealing with exploration inefficiencies in reinforcement learning. Building on
Learning from Hallucination (LfH), which synthesizes training data from past
successful navigation experiences in simpler environments, Dyna-LfLH incorporates
dynamic obstacles by generating them through a learned latent distribution. This
enables efficient and safe motion planner training. We evaluate Dyna-LfLH on a
ground robot in both simulated and real environments, achieving up to a 25 percent
improvement in success rate compared to baselines.
},
}
Generated by bib2html.pl (written by Patrick Riley) on Sat Nov 15, 2025 21:30:11