APPL: Adaptive Planner Parameter Learning (2022)
Xuesu Xiao, Zizhao Wang, Zifan Xu, Bo Liu, Gauraang Dhamankar, Anirudh Nair, Garrett Warnell, and Peter Stone
While current autonomous navigation systems allow robots to successfully drive themselves from one point to another in specific environments, they typically require extensive manual parameter re-tuning by human robotics experts in order to function in new environments. Furthermore, even for just one complex environment, a single set of fine-tuned parameters may not work well in different regions of that environment. These problems prohibit reliable mobile robot deployment by non-expert users. As a remedy, we propose Adaptive Planner Parameter Learning (APPL), a machine learning framework that can leverage non-expert human interaction via several modalities – including teleoperated demonstrations, corrective interventions, and evaluative feedback – and also unsupervised reinforcement learning to learn a parameter policy that can dynamically adjust the parameters of classical navigation systems in response to changes in the environment. APPL inherits safety and explainability from classical navigation systems while also enjoying the benefits of machine learning, i.e., the ability to adapt and improve from experience. We present a suite of individual APPL methods and also a unifying cycle-of-learning scheme that combines all the proposed methods in a framework that can improve navigation performance through continual, iterative human interaction and simulation training.
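To make the core idea concrete, the sketch below shows what a learned "parameter policy" amounts to in code: a function that maps the robot's current local observation to a set of classical-planner parameters, applied once per planning cycle. Everything here is illustrative, not the paper's interface: the parameter names are typical of a DWA-style local planner, the linear policy stands in for whatever function class is actually learned from demonstrations, interventions, feedback, or reinforcement learning, and `set_planner_params` is a placeholder for however a real system pushes parameters to its planner (e.g., a dynamic reconfiguration call in ROS).

```python
# Minimal sketch of the APPL idea (assumed names, not the paper's API):
# a parameter policy maps an observation to planner parameters each cycle.
import numpy as np

# Parameters a DWA-style local planner might expose (illustrative), with bounds.
PARAM_NAMES = ["max_vel_x", "vx_samples", "vtheta_samples", "occdist_scale", "inflation_radius"]
PARAM_LOW = np.array([0.2, 4.0, 8.0, 0.05, 0.1])
PARAM_HIGH = np.array([1.5, 20.0, 60.0, 1.00, 0.6])


class ParameterPolicy:
    """Maps an observation vector to planner parameters within fixed bounds."""

    def __init__(self, obs_dim, rng=np.random.default_rng(0)):
        # A linear policy is a stand-in for the learned policy.
        self.W = rng.normal(scale=0.1, size=(len(PARAM_NAMES), obs_dim))
        self.b = np.zeros(len(PARAM_NAMES))

    def __call__(self, obs):
        squashed = 1.0 / (1.0 + np.exp(-(self.W @ obs + self.b)))  # values in (0, 1)
        return PARAM_LOW + squashed * (PARAM_HIGH - PARAM_LOW)


def set_planner_params(params):
    """Placeholder for pushing parameters to the underlying planner."""
    print({name: round(float(v), 3) for name, v in zip(PARAM_NAMES, params)})


if __name__ == "__main__":
    policy = ParameterPolicy(obs_dim=360)            # e.g., a 360-beam laser scan
    for _ in range(3):                               # one call per planning cycle
        scan = np.random.uniform(0.1, 10.0, 360)     # stand-in for a real observation
        set_planner_params(policy(scan))
```

Because the policy only selects parameters while the classical planner still computes the motion commands, the system keeps the planner's safety and explainability properties, which is the trade-off the abstract emphasizes.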
View:
PDF
Citation:
Robotics and Autonomous Systems (2022).
Peter Stone Faculty pstone [at] cs utexas edu
Garrett Warnell Research Scientist warnellg [at] cs utexas edu