Peter Stone's Selected Publications



Coopernaut: End-to-End Driving with Cooperative Perception for Networked Vehicles

Coopernaut: End-to-End Driving with Cooperative Perception for Networked Vehicles.
Jiaxun Cui, Hang Qiu, Dian Chen, Peter Stone, and Yuke Zhu.
In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022.
Project website

Download

[PDF] (3.5MB)

Abstract

Optical sensors and learning algorithms for autonomous vehicles have dramatically advanced in the past few years. Nonetheless, the reliability of today's autonomous vehicles is hindered by the limited line-of-sight sensing capability and the brittleness of data-driven methods in handling extreme situations. With recent developments in telecommunication technologies, cooperative perception with vehicle-to-vehicle communications has become a promising paradigm to enhance autonomous driving in dangerous or emergency situations. We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving. Our model encodes LiDAR information into compact point-based representations that can be transmitted as messages between vehicles via realistic wireless channels. To evaluate our model, we develop AUTOCASTSIM, a network-augmented driving simulation framework with example accident-prone scenarios. Our experiments on AUTOCASTSIM suggest that our cooperative perception driving models lead to a 40% improvement in average success rate over egocentric driving models in these challenging driving situations, and a 5x smaller bandwidth requirement than the prior work V2VNet.
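
The message-passing idea the abstract describes can be sketched compactly: each vehicle compresses its LiDAR cloud into a small set of point-anchored features, transmits that set as a message, and the ego vehicle fuses the messages it receives with its own representation to predict controls. The PyTorch sketch below is a hypothetical illustration of that pipeline only; the module names, dimensions, top-k keypoint selection, and mean-pool fusion are assumptions made for exposition, not the paper's actual architecture.

# Hypothetical sketch of cooperative perception via compact point messages.
# All names, sizes, and the fusion rule are illustrative assumptions.
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Encode an (N, 3) LiDAR cloud into K compact (position, feature) pairs."""
    def __init__(self, k: int = 64, feat_dim: int = 32):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, points: torch.Tensor):
        # Per-point features, then keep the K highest-magnitude points as a
        # crude stand-in for learned keypoint selection.
        feats = self.mlp(points)                          # (N, feat_dim)
        scores = feats.norm(dim=-1)                       # (N,)
        idx = scores.topk(min(self.k, points.shape[0])).indices
        return points[idx], feats[idx]                    # the "message"

class EgoDriver(nn.Module):
    """Fuse the ego message with messages received from other vehicles."""
    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 3),        # e.g. throttle, brake, steering
        )

    def forward(self, messages):
        # messages: list of (positions, features), assumed already transformed
        # into the ego frame. Mean-pool as a simple permutation-invariant fusion.
        feats = torch.cat([f for _, f in messages], dim=0)
        return self.head(feats.mean(dim=0))

encoder, driver = PointEncoder(), EgoDriver()
ego_cloud, other_cloud = torch.randn(1024, 3), torch.randn(1024, 3)
msg_ego = encoder(ego_cloud)
msg_other = encoder(other_cloud)   # in practice, sent over a V2V channel
controls = driver([msg_ego, msg_other])
print(controls)                    # three predicted control values

Transmitting only K position-feature pairs rather than the raw cloud is what makes the bandwidth saving reported in the abstract plausible: a message here is K x (3 + feat_dim) floats instead of N x 3 raw points.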

BibTeX Entry

@InProceedings{CVPR22-cui,
  author = {Jiaxun Cui and Hang Qiu and Dian Chen and Peter Stone and Yuke Zhu},
  title = {Coopernaut: End-to-End Driving with Cooperative Perception for Networked Vehicles},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  location = {New Orleans, LA, USA},
  month = {June},
  year = {2022},
  abstract = { 
              Optical sensors and learning algorithms for autonomous
              vehicles have dramatically advanced in the past few
              years.  Nonetheless, the reliability of today's
              autonomous vehicles is hindered by the limited
              line-of-sight sensing capability and the brittleness of
              data-driven methods in handling extreme situations. With
              recent developments in telecommunication technologies,
              cooperative perception with vehicle-to-vehicle
              communications has become a promising paradigm to
              enhance autonomous driving in dangerous or emergency
              situations. We introduce COOPERNAUT, an end-to-end
              learning model that uses cross-vehicle perception for
              vision-based cooperative driving. Our model encodes
              LiDAR information into compact point-based
              representations that can be transmitted as messages
              between vehicles via realistic wireless channels. To
              evaluate our model, we develop AUTOCASTSIM, a
              network-augmented driving simulation framework with
              example accident-prone scenarios.  Our experiments on
              AUTOCASTSIM suggest that our cooperative perception
              driving models lead to a 40% improvement in average
              success rate over egocentric driving models in these
              challenging driving situations and a 5x smaller
              bandwidth requirement than the prior work V2VNet. },
  wwwnote = {<a href="https://ut-austin-rpl.github.io/Coopernaut/">Project website</a>},
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 17, 2024 18:42:51