UT Austin Villa Publications


Dexterous Legged Locomotion in Confined 3D Spaces with Reinforcement Learning

Zifan Xu, Amir Hossain Raj, Xuesu Xiao, and Peter Stone. Dexterous Legged Locomotion in Confined 3D Spaces with Reinforcement Learning. In IEEE International Conference on Robotics and Automation, May 2024.

Download

[PDF] 2.0MB

Abstract

Recent advances in locomotion controllers utilizing deep reinforcement learning (RL) have yielded impressive results in terms of achieving rapid and robust locomotion across challenging terrain, such as rugged rocks, non-rigid ground, and slippery surfaces. However, while these controllers primarily address challenges underneath the robot, relatively little research has investigated legged mobility through confined 3D spaces, such as narrow tunnels or irregular voids, which impose all-around constraints. The cyclic gait patterns produced by existing RL-based methods, which learn parameterized locomotion skills characterized by motion parameters such as velocity and body height, may not be adequate to navigate robots through challenging confined 3D spaces that require both agile 3D obstacle avoidance and robust legged locomotion. Instead, we propose to learn locomotion skills end-to-end from goal-oriented navigation in confined 3D spaces. To address the inefficiency of tracking distant navigation goals, we introduce a hierarchical locomotion controller that combines a classical planner, tasked with planning waypoints to reach a faraway global goal location, and an RL-based policy trained to follow these waypoints by generating low-level motion commands. This approach allows the policy to explore its own locomotion skills within the entire solution space and facilitates smooth transitions between local goals, enabling long-term navigation towards distant goals. In simulation, our hierarchical approach succeeds at navigating through demanding confined 3D environments, outperforming both pure end-to-end learning approaches and parameterized locomotion skills. We further demonstrate the successful real-world deployment of our simulation-trained controller on a real robot.
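The hierarchical structure described in the abstract — a classical planner that emits waypoints toward a distant goal, and a learned low-level policy that follows each waypoint — can be sketched as a simple control loop. The sketch below is illustrative only: `plan_waypoints` stands in for the paper's classical planner with trivial straight-line interpolation, and `waypoint_policy` stands in for the trained RL policy with a proportional velocity command; all names, gains, and the 2D state are assumptions, not the paper's implementation.

```python
import math


def plan_waypoints(start, goal, step=0.5):
    """Classical-planner stand-in: evenly spaced waypoints toward the goal.

    The real planner reasons about a confined 3D map; straight-line
    interpolation is used here purely to illustrate the interface.
    """
    sx, sy = start
    gx, gy = goal
    n = max(1, int(math.hypot(gx - sx, gy - sy) / step))
    return [(sx + (gx - sx) * i / n, sy + (gy - sy) * i / n)
            for i in range(1, n + 1)]


def waypoint_policy(state, waypoint, gain=0.2):
    """RL-policy stand-in: proportional motion command toward the waypoint.

    The actual policy is trained end-to-end and outputs low-level motion
    commands; this placeholder just moves the state a fraction of the
    remaining distance each step.
    """
    x, y = state
    wx, wy = waypoint
    return (gain * (wx - x), gain * (wy - y))


def hierarchical_controller(start, goal, steps_per_waypoint=30, tol=0.05):
    """Follow the planner's waypoints with the low-level policy.

    Each waypoint acts as a local goal; tracking nearby local goals avoids
    the inefficiency of chasing one faraway global goal directly.
    """
    state = start
    for wp in plan_waypoints(start, goal):
        for _ in range(steps_per_waypoint):
            vx, vy = waypoint_policy(state, wp)
            state = (state[0] + vx, state[1] + vy)
            if math.hypot(wp[0] - state[0], wp[1] - state[1]) < tol:
                break  # local goal reached; hand off to the next waypoint
    return state
```

Running `hierarchical_controller((0.0, 0.0), (2.0, 0.0))` drives the state to within the tolerance of the global goal, waypoint by waypoint, mirroring how the paper's planner decomposes a distant goal into local ones that the policy can track.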

BibTeX

@InProceedings{Xu_ICRA2024,
  author   = {Zifan Xu and Amir Hossain Raj and Xuesu Xiao and Peter Stone},
  title    = {Dexterous Legged Locomotion in Confined 3D Spaces with Reinforcement Learning},
  booktitle = {IEEE International Conference on Robotics and Automation},
  year     = {2024},
  month    = {May},
  location = {Yokohama, Japan},
  abstract = {Recent advances in locomotion controllers utilizing deep reinforcement learning
(RL) have yielded impressive results in terms of achieving rapid and robust
locomotion across challenging terrain, such as rugged rocks, non-rigid ground,
and slippery surfaces. However, while these controllers primarily address
challenges underneath the robot, relatively little research has investigated
legged mobility through confined 3D spaces, such as narrow tunnels or irregular
voids, which impose all-around constraints. The cyclic gait patterns produced
by existing RL-based methods, which learn parameterized locomotion skills
characterized by motion parameters such as velocity and body height, may not be
adequate to navigate robots through challenging confined 3D spaces that require
both agile 3D obstacle avoidance and robust legged locomotion. Instead, we
propose to learn locomotion skills end-to-end from goal-oriented navigation in
confined 3D spaces. To address the inefficiency of tracking distant navigation
goals, we introduce a hierarchical locomotion controller that combines a
classical planner, tasked with planning waypoints to reach a faraway global goal
location, and an RL-based policy trained to follow these waypoints by generating
low-level motion commands. This approach allows the policy to explore its own
locomotion skills within the entire solution space and facilitates smooth
transitions between local goals, enabling long-term navigation towards distant
goals. In simulation, our hierarchical approach succeeds at navigating through
demanding confined 3D environments, outperforming both pure end-to-end learning
approaches and parameterized locomotion skills. We further demonstrate the
successful real-world deployment of our simulation-trained controller on a real
robot.
  },
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 17, 2024 18:47:34