Benchmark for Autonomous Robot Navigation (BARN)

BARN Dataset


About

Do you have a navigation system for mobile robots that you're interested in benchmarking against other approaches? Are you interested in precisely characterizing which types of environments it can handle smoothly, and which cause it more problems?

If so, the Benchmark for Autonomous Robot Navigation (BARN) is designed for you. BARN is characterized by

  • Highly cluttered obstacle configurations representative of challenging or adversarial real-world navigation scenarios
  • Benchmarking of the entire sense-plan-act(-learn) pipeline, rather than individual components
  • Customizability for your robot's size, plus a pre-generated suite of 300 navigation environments
  • Easy instantiation in the physical world
  • Benchmark results of baseline navigation systems

The BARN dataset (presented in the Benchmarking Metric Ground Navigation paper) provides a suite of simulation environments for testing collision-free mobile robot navigation in highly cluttered environments. BARN focuses on testing a mobile robot's low-level motion skills (i.e., how to navigate), rather than task-level decision-making (i.e., where to navigate). The environments cover a wide range of metric navigation difficulties, from relatively open spaces to extremely cluttered environments where robots need to squeeze through dense obstacles without collisions. These difficulties represent challenging and adversarial conditions for autonomous navigation in the real world, e.g., post-disaster scenarios such as search and rescue missions, and they cause problems even for state-of-the-art navigation systems.

BARN benchmarks the entire sense-plan-act(-learn) pipeline of your navigation system, rather than any single component in isolation, and provides extensive, objective, statistically significant benchmark results. BARN can also be used as a training environment for learning-based navigation.

BARN is customizable to your robot's specific size, and we provide 300 pre-generated environments for small Unmanned Ground Vehicles, e.g., a Clearpath Jackal robot. We also provide a set of difficulty metrics to measure your navigation system's sensitivity to different levels of navigation difficulty.
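For intuition, the sketch below shows one clearance-based proxy for navigation difficulty computed from an occupancy grid with a distance transform. This is only an illustrative example, not necessarily one of the metrics shipped with the dataset; the grid format and resolution argument are assumptions.

import numpy as np
from scipy.ndimage import distance_transform_edt

# Illustrative difficulty proxy (not necessarily a metric shipped with BARN):
# average clearance, i.e., the mean distance from free cells to the closest obstacle.
def mean_clearance(occupied, resolution):
    # occupied: boolean grid (True = obstacle); resolution: meters per cell
    clearance = distance_transform_edt(~occupied) * resolution
    return clearance[~occupied].mean()   # lower mean clearance -> harder environment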

BARN is easily instantiated in the physical world with simple objects (e.g., cardboard boxes) and can be used to test sim-to-real transfer of your navigation system, whether it uses classical or learning-based approaches.

We provide benchmark results for a sampling-based motion planner (the Dynamic Window Approach) and an optimization-based motion planner (Elastic Bands) as baseline navigation systems.




Usage

Download

To use the dataset, download it at this link. Within the dataset folder, there are subfolders for the Gazebo .world files, occupancy grid representations, C-space representations, the difficulty metrics for each environment, the paths through each environment, and pgm/yaml files for ROS map_server.
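As a starting point for offline analysis, the sketch below loads one of the pgm/yaml map pairs into a numpy occupancy grid. The file name and field handling are assumptions based on standard ROS map_server conventions; adjust them to the layout you find after unpacking the dataset.

import os
import yaml
import numpy as np
from PIL import Image

def load_map(yaml_path):
    # Load a map_server-style pgm/yaml pair as a boolean occupancy grid.
    with open(yaml_path) as f:
        meta = yaml.safe_load(f)                     # resolution, origin, thresholds
    image_path = os.path.join(os.path.dirname(yaml_path), meta["image"])
    grid = np.asarray(Image.open(image_path), dtype=np.uint8)
    # In map_server conventions, darker pixels are more likely to be occupied.
    occupied = grid < 255 * (1.0 - meta["occupied_thresh"])
    return occupied, meta["resolution"], meta["origin"]

# Hypothetical file name; BARN environments are indexed 0-299.
occupied, resolution, origin = load_map("maps/yaml_0.yaml")
print(occupied.shape, resolution, origin)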

Running Simulations

Simulations can be run on Ubuntu 18.04 with ROS Melodic using a Jackal robot. To run your own simulations on BARN, first install ROS Melodic from the ROS website, then install all Jackal-related packages using these instructions. This ROS package can then be used to run simulations in Gazebo.
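If you want to sweep all 300 environments programmatically, one option is to drive roslaunch from a small script, as in the sketch below. The package name, launch file, and world_path argument are placeholders; substitute the ones provided by the ROS package linked above.

import subprocess

def run_barn_world(world_id):
    # Launch Gazebo with a single BARN world. The package, launch file,
    # and argument names below are placeholders, not the actual API.
    world_file = f"worlds/world_{world_id}.world"
    subprocess.run(
        ["roslaunch", "barn_simulation", "run_barn.launch",
         f"world_path:={world_file}"],
        check=True,
    )

for world_id in range(300):
    run_barn_world(world_id)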

Customizing BARN for Your Own Robot

Our dataset is configured for a Jackal robot (508 x 430 x 250 mm). However, we have included source code to allow you to customize the dataset for your specific robot's size. With the dataset and the source code, you can create Gazebo world files with the same configurations as the original dataset but with a different scale, or you can generate completely new configurations with different robot footprints, grid sizes and cylinder sizes.
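To make the scaling idea concrete, the sketch below rescales cylinder obstacles by the ratio of robot widths. It is only a conceptual illustration, not the generation code included with the dataset.

JACKAL_WIDTH = 0.430  # meters, from the 508 x 430 x 250 mm Jackal footprint

def rescale_obstacles(cylinders, new_robot_width):
    # Scale (x, y, radius) cylinder tuples so gaps stay proportional
    # to the new robot's width relative to the Jackal's.
    scale = new_robot_width / JACKAL_WIDTH
    return [(x * scale, y * scale, r * scale) for x, y, r in cylinders]

# Example: a 0.6 m wide robot keeps the same relative clutter level.
scaled = rescale_obstacles([(1.0, 2.0, 0.15), (1.5, 2.2, 0.15)], 0.6)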

Citing BARN

If you find BARN useful for your research, please cite the following paper:

@inproceedings{perille2020benchmarking,
  title = {Benchmarking Metric Ground Navigation},
  author = {Perille, Daniel and Truong, Abigail and Xiao, Xuesu and Stone, Peter},
  booktitle = {2020 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)},
  year = {2020},
  organization = {IEEE}
}


Examples

Below are some examples of how the BARN dataset can be used to evaluate new navigation systems.

APPLR

Read the APPLR paper here.
@inproceedings{xu2021applr,
  title = {APPLR: Adaptive Planner Parameter Learning from Reinforcement},
  author = {Xu, Zifan and Dhamankar, Gauraang and Nair, Anirudh and Xiao, Xuesu and Warnell, Garrett and Liu, Bo and Wang, Zizhao and Stone, Peter},
  booktitle = {2021 IEEE International Conference on Robotics and Automation (ICRA)},
  year = {2021},
  organization = {IEEE}
}

APPLI

Read the APPLI paper here.
@inproceedings{wang2021appli,
  title = {APPLI: Adaptive Planner Parameter Learning From Interventions},
  author = {Wang, Zizhao and Xiao, Xuesu and Liu, Bo and Warnell, Garrett and Stone, Peter},
  booktitle = {2021 IEEE International Conference on Robotics and Automation (ICRA)},
  year = {2021},
  organization = {IEEE}
}

Agile Robot Navigation through Hallucinated Learning and Sober Deployment (HLSD)

Read the HLSD paper here.
@inproceedings{xiao2021agile,
  title = {Agile Robot Navigation through Hallucinated Learning and Sober Deployment},
  author = {Xiao, Xuesu and Liu, Bo and Stone, Peter},
  booktitle = {2021 IEEE International Conference on Robotics and Automation (ICRA)},
  year = {2021},
  organization = {IEEE}
}

Toward Agile Maneuvers in Highly Constrained Spaces: Learning from Hallucination (LfH)

Read the LfH paper here.
@article{xiao2021toward,
  title = {Toward agile maneuvers in highly constrained spaces: Learning from hallucination},
  author = {Xiao, Xuesu and Liu, Bo and Warnell, Garrett and Stone, Peter},
  journal = {IEEE Robotics and Automation Letters},
  volume = {6},
  number = {2},
  pages = {1503--1510},
  year = {2021},
  publisher = {IEEE}
}



Contact

For questions, please contact:

Dr. Xuesu Xiao
Department of Computer Science
The University of Texas at Austin
2317 Speedway, Austin, Texas 78712-1757 USA
+1 (512) 471-9765
xiao@cs.utexas.edu