ICRA 2022 BARN Challenge


Wondering how state-of-the-art navigation systems perform when autonomously navigating highly constrained spaces? Check out our detailed report on the results and findings of The BARN Challenge at ICRA 2022 here!


Congratulations to UT AMRL, UVA AMR, and Temple TRAIL on winning 1st, 2nd, and 3rd place in The BARN Challenge at ICRA 2022! See you next year!


About

Designing autonomous robot navigation systems has been a topic of interest to the robotics community for decades. Indeed, many existing navigation systems allow robots to move from one point to another in a collision-free manner, which may create the impression that navigation is a solved problem. However, autonomous mobile robots still struggle in many scenarios, e.g., colliding with or getting stuck in novel and tightly constrained spaces. These troublesome scenarios have many real-world implications, such as poor navigation performance in adversarial search and rescue environments, in naturally cluttered daily households, and in congested social spaces such as classrooms, offices, and cafeterias. Meeting these real-world challenges requires systems that can both successfully and efficiently navigate the environment with confidence, posing fundamental challenges to current autonomous systems, from perception to control. Therefore, the Benchmark Autonomous Robot Navigation (BARN) Challenge aims at creating a benchmark for state-of-the-art navigation systems and pushing the boundaries of their performance in these challenging and highly constrained environments.

The Challenge

The BARN Challenge will take place primarily on the simulated BARN dataset and also physically at the conference venue in Philadelphia.

The BARN dataset comprises 300 pre-generated navigation environments, ranging from easy open spaces to difficult, highly constrained ones, along with an environment generator that can create novel BARN environments. The task is to navigate a Clearpath Jackal robot from a predefined start to a goal location as quickly as possible without any collision. The Jackal robot will be standardized with a 2D LiDAR, a motor controller with a maximum speed of 2 m/s, and appropriate computational resources. Participants will need to develop navigation systems that consume the standardized LiDAR input, run all computation onboard using the provided resources, and output motion commands to drive the motors (a minimal node sketch follows the list below). Participants are welcome to use any approach to tackle the navigation problem, such as classical sampling-based or optimization-based planners, end-to-end learning, or hybrid approaches. The following infrastructure will be provided by the competition organizers:

  • The 300 pre-generated BARN environments
  • The BARN environment generator to generate novel environments
  • Baseline navigation systems including classical, end-to-end learning, and hybrid approaches
  • A training pipeline running the standardized Jackal robot in Gazebo simulation with Robot Operating System (ROS) Melodic (in Ubuntu 18.04), with the option of being containerized in Docker or Singularity containers for fast and standardized setup and evaluation
  • Parallelization tools to launch batch training/evaluation on computer clusters (e.g., HTCondor)
  • A standardized evaluation pipeline to compete against other navigation systems
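
To make the expected interface concrete, below is a minimal sketch of a ROS Melodic (rospy) node that consumes LiDAR scans and publishes velocity commands. The topic names /front/scan and /cmd_vel and the gap-chasing steering rule are illustrative assumptions rather than the official pipeline's specification; a real entry would replace the toy logic with a competitive planner:

```python
#!/usr/bin/env python
# Minimal navigation-node sketch (assumptions: /front/scan and /cmd_vel
# follow common Jackal defaults; the steering rule is a toy placeholder).
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

MAX_SPEED = 2.0  # m/s, the standardized Jackal maximum speed


class NaiveNavigator(object):
    def __init__(self):
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/front/scan", LaserScan, self.scan_callback)

    def scan_callback(self, scan):
        # Steer toward the farthest LiDAR return (toy gap-chasing logic).
        best_idx = max(range(len(scan.ranges)), key=lambda i: scan.ranges[i])
        heading = scan.angle_min + best_idx * scan.angle_increment

        # Slow down as the nearest obstacle gets closer.
        valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
        closest = min(valid) if valid else scan.range_max

        cmd = Twist()
        cmd.linear.x = min(MAX_SPEED, closest)  # crude speed modulation
        cmd.angular.z = 1.5 * heading           # proportional steering
        self.cmd_pub.publish(cmd)


if __name__ == "__main__":
    rospy.init_node("naive_navigator")
    NaiveNavigator()
    rospy.spin()
```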


Competition Rules

During the competition, another 50 new BARN evaluation environments will be generated, which will not be accessible to the public. Each participating team is required to submit the developed navigation system as a (collection of) launchable ROS node(s). The final performance will be evaluated based on a standardized metric that considers navigation success rate (a collision or a failure to reach the goal counts as a failure), actual traversal time, and environment difficulty (measured by optimal traversal time). Specifically, the score $s_i$ for navigating each environment $i$ will be computed as

$$s_i = \mathbf{1}^i_{\text{success}} \times \frac{OT_i}{\text{clip}(AT_i,\ 4OT_i,\ 8OT_i)}$$

where the indicator $\mathbf{1}^i_{\text{success}}$ is set to 1 if the robot reaches the navigation goal without any collisions, and to 0 otherwise. $AT_i$ denotes the actual traversal time, while $OT_i$ denotes the optimal traversal time, which serves as an indicator of environment difficulty and is measured by the shortest traversal time assuming the robot always travels at its maximum speed (2 m/s):

$$OT_i = \frac{\text{Path Length}_i}{\text{Max Speed}}$$

The Path Length is provided by the BARN dataset based on Dijkstra's search. The clip function bounds $AT_i$ between $4OT_i$ and $8OT_i$, so that navigating extremely quickly in easy environments or extremely slowly in difficult ones does not disproportionately scale the score. The overall score of each team is the score averaged over all 50 test BARN environments, with 10 trials in each environment. Higher scores indicate better navigation performance.
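
For concreteness, here is the scoring rule re-implemented in a few lines of Python. This is a sketch derived from the formulas above, not the official evaluation code:

```python
MAX_SPEED = 2.0  # m/s, the standardized Jackal maximum speed

def clip(value, low, high):
    """Clamp value to the closed interval [low, high]."""
    return max(low, min(value, high))

def barn_score(success, actual_time, path_length):
    """Score for a single trial, following the formulas above.

    success:     True if the goal was reached without any collision.
    actual_time: measured traversal time AT, in seconds.
    path_length: Dijkstra path length from the BARN dataset, in meters.
    """
    if not success:
        return 0.0
    optimal_time = path_length / MAX_SPEED  # OT
    return optimal_time / clip(actual_time, 4 * optimal_time, 8 * optimal_time)
```

Note that the best achievable score per environment is 0.25 (reached when $AT_i \le 4OT_i$), which is consistent with the leaderboard scores below.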

During the in-person conference in Philadelphia, physical obstacle courses will be set up at the conference venue using cardboard boxes (see the example below). A physical Clearpath Jackal robot will be provided by the competition sponsor, Clearpath Robotics, with the same standard sensor and actuator suites as in simulation. However, a sim-to-real gap may still exist due to factors such as the friction coefficient between the physical wheels and the venue floor, electric motor impedance and resistance, real-world sensor noise distribution, etc. The top three teams will be invited to compete in the physical competition. In case a team is not able to attend the conference in person, the organizers will run their submitted navigation stack on their behalf on the physical robot at the conference. The team that achieves the highest collision-free navigation success rate and shortest traversal time wins the competition.


Physical Example

Due to the overhead of the simulator in the simulation phase, we do not specifically constrain computation. We will use a computer with an Intel Xeon Gold 6342 CPU @ 2.80 GHz to evaluate all submissions. However, during the physical competition, the robot's onboard computation will be limited to an Intel i3 CPU (i3-9100TE or similar) with 16 GB of DDR4 RAM. GPUs will not be available, given the small footprint of the LiDAR perception data.



Leaderboard

Real World

Ranking  Team          Success / Total Trials  Code Link
1        UT AMRL       8/9                     GitHub
2        UVA AMR       4/9                     GitHub
3        Temple TRAIL  2/9                     GitHub
The real-world challenge comprises three different obstacle courses. Each team has 30 minutes to finish five timed trials on each course, of which the top three trials are counted. The team that finishes the most successful trials (reaching the goal without any collision) wins. In the case of a tie, the team with the fastest average traversal time wins.

Simulation

Ranking  Team          Score   Comment
1        TRAIL         0.2415  Temple University
2        LfLH*         0.2334  Details
3        AMRL          0.2310  The University of Texas at Austin
4        DRAGON        0.2200  The University of Virginia
5        E-Band*       0.2053  Details
6        e2e*          0.2042  Details
7        APPLR-DWA*    0.1979  Details
8        Yiyuiii       0.1969  Nanjing University
9        NavBot        0.1733  Indian Institute of Science, Bangalore
10       Fast DWA*     0.1709  Details
11       Default DWA*  0.1627  Details
* denotes a baseline.
Last updated: May 30 2022. If you do not see your submission, it is still under evaluation.


Participation

We have packaged the entire navigation pipeline with the BARN dataset in a standardized Singularity container. You should only modify the navigation system and leave the other parts intact. You can develop your navigation system in your local environment or in a container. You will need to upload your code (e.g., to GitHub) and submit a Singularityfile.def file that retrieves your code. We will build your Singularity container from the submitted Singularityfile.def, run the built container for evaluation, and then publish your score on our website. More detailed instructions can be found here.
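
As a rough illustration only (see the detailed instructions linked above for the official template), a Singularityfile.def typically bootstraps a ROS Melodic image and pulls in your code; the base image and repository URL below are placeholders:

```
Bootstrap: docker
From: ros:melodic

%post
    apt-get update && apt-get install -y git
    # Placeholder URL: replace with the repository hosting your navigation stack.
    git clone https://github.com/<your-org>/<your-nav-stack>.git /opt/nav_stack
    # Any build or dependency-installation steps for your ROS nodes go here.
```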
Please use this Submission Form to submit your navigation stack to be evaluated by our standardized evaluation pipeline.

Citing BARN

If you find BARN useful for your research, please cite the following paper:

@inproceedings{perille2020benchmarking,
  title        = {Benchmarking Metric Ground Navigation},
  author       = {Perille, Daniel and Truong, Abigail and Xiao, Xuesu and Stone, Peter},
  booktitle    = {2020 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)},
  year         = {2020},
  organization = {IEEE}
}


Schedule

Date                        Event
March 29 2022               Online submission open
April 5 2022                Online leaderboard open
May 22 2022                 Online submission closed, top three simulation teams selected
May 25 2022, 9:45am-11am    Physical Competition Obstacle Course 1
May 25 2022, 3pm-5pm        Physical Competition Obstacle Course 2
May 26 2022, 9:45am-11am    Physical Competition Obstacle Course 3
May 26 2022, 3pm-5pm        Award Ceremony and Open Discussions


Organizers

Xuesu Xiao
Everyday Robots
GMU / UT Austin

Zifan Xu
UT Austin

Zizhao Wang
UT Austin

Yunlong Song
University of Zurich
ETH Zurich

Garrett Warnell
US Army Research Lab
UT Austin

Peter Stone
UT Austin
Sony AI

Tingnan Zhang
Robotics@Google

Clearpath Robotics
Competition Sponsor


Contact

For questions, please contact:

Dr. Xuesu Xiao
Department of Computer Science
The University of Texas at Austin
2317 Speedway, Austin, Texas 78712-1757 USA
+1 (512) 471-9765
xiao@cs.utexas.edu