
The BRAVO workshop presents a unique opportunity for researchers, industry experts, and policymakers to come together and address the critical challenge of trustworthy validation for autonomous vehicle systems on open roads.

Advances in artificial intelligence and computer vision are propelling the rise of highly automated driver-assistance systems (ADAS) and autonomous vehicles (AVs), with the potential to revolutionize transportation and mobility services. However, deploying data-driven safety-critical systems with limited onboard resources and enduring guarantees on open roads remains a significant challenge.

To ensure safe deployment, ADAS/AVs must prove their ability to navigate a wide range of driving conditions, including rare and dangerous situations, severe perturbations, and even adversarial attacks. Additionally, these capabilities must be demonstrated to regulatory bodies, to secure certification, and to users, to earn their confidence.

The BRAVO workshop seeks to foster collaboration and innovation in developing tools and testbeds for assessing and enhancing the robustness, generalization power, transparency, and verification of computer vision models for ADAS/AVs. By working together, we can contribute to a safer, more efficient, and sustainable future for transportation.

We invite you to join us at the BRAVO workshop to explore solutions and contribute to developing reliable, robust computer vision for autonomous vehicles. Together, we can shape the future of transportation, ensuring safety and efficiency for all road users.

Keynote Speakers

Program

All quoted times refer to CEST.

8:45 - 9:00
Opening remarks
9:00 - 9:45
Invited talk #1: “Open-world Scene Understanding with Intuitive Priors” by Raoul de Charette
9:45 - 10:30
Invited talk #2: “Real World End-to-End Learnt Driving Models — an Invitation” by Jamie Shotton
10:30 - 11:15
Poster session #1 + Coffee break
11:15 - 12:00
Invited talk #3: “3D Open World: Generalize and Recognize Novelty” by Tatiana Tommasi
12:00 - 12:45
Invited talk #4: “How to Safely Handle Out-of-Distribution Data in the Open World: Challenges, Methods, and Path Forward” by Sharon Yixuan Li
12:45 - 13:45
Lunch break
13:45 - 14:15
Spotlight presentations:
GPS-GLASS: Learning Nighttime Semantic Segmentation Using Daytime Video and GPS data
T-FFTRadNet: Object Detection with Swin Vision Transformers from Raw ADC Radar Signals
An Empirical Analysis of Range for 3D Object Detection
14:15 - 14:45
BRAVO Challenge
14:45 - 15:30
Invited talk #5: “Fake it till you Make It: Can Synthetic Data Improve Model Robustness” by Kate Saenko
15:30 - 16:15
Poster session #2 + Coffee break
16:15 - 17:00
Invited talk #6: “Efficient and Effective Certification for Street Scene Segmentation” by Mario Fritz
17:00 - 17:45
Panel discussion + Q&A
17:45 - 17:55
Closing remarks

Please check the conference attendance details in advance, including the room assignments for the workshops.

Accepted Works

Workshop proceedings at TheCVF Open Access, IEEE Computer Society, and IEEE Xplore.

Poster session #1 (morning):

Poster session #2 (afternoon):

Reviewers

We extend our warmest thanks to the team of reviewers who made this call for contributions possible:

Adrien Lafage, ENSTA Paris
Alexandre Boulch, Valeo.ai
Alexandre Ramé, LIP6
Antoine Saporta, Meero
Antonin Vobecky, Valeo.ai / CTU, FEE / CIIRC
Arthur Ouaknine, McGill University / Mila
Cédric Rommel, Valeo.ai
Charles Corbiere, Valeo.ai
David Hurych, Valeo.ai
Dmitry Kangin, Lancaster University
Eduard Zamfir, University of Würzburg
Emanuel Aldea, Paris-Saclay University
Emilie Wirbel, Nvidia
Fabio Arnez, Université Paris-Saclay, CEA, List
Fabio Pizzati, University of Oxford
Fredrik Gustafsson, Uppsala University
Himalaya Jain, Helsing
Krzysztof Lis, EPFL
Loïck Chambon, Valeo.ai
Matej Grcić, University of Zagreb
Matthieu Cord, Valeo.ai / Sorbonne University
Maximilian Jaritz, Amazon
Mickael Chen, Valeo.ai
Nazir Nayal, Koç University
Olivier Laurent, Université Paris-Saclay
Oriane Siméoni, Valeo.ai
Patrick Pérez, Valeo.ai
Pau de Jorge Aranda, University of Oxford
Raffaello Camoriano, Politecnico di Torino
Raoul de Charette, Inria
Renaud Marlet, Valeo.ai / École des Ponts ParisTech
Riccardo Volpi, Naver Labs
Spyros Gidaris, Valeo.ai
Suha Kwak, POSTECH

...and three other reviewers who preferred to remain anonymous.

Call for Contributions

We invite participants to submit their work to the BRAVO Workshop as full papers or extended abstracts.

Full-Paper Submissions

Full papers must present original research, not published elsewhere, and follow the ICCV main conference format with a length of 4 to 8 pages (extra pages with references only are allowed). Supplemental materials are not allowed. Accepted full papers will be included in the conference proceedings.

Extended Abstract Submissions

We welcome extended abstracts, which suit works of a more speculative or preliminary nature that may not yet be ready for a full-length paper. Authors are also welcome to submit extended abstracts for previously or concurrently published works that could further the workshop's objectives.

Extended abstracts must have no more than 1000 words, in addition to a single illustration and references. We suggest authors use the extended abstract template provided.

Accepted extended abstracts will be presented without inclusion in the proceedings.

Topics of Interest

The workshop welcomes submissions on all topics related to robustness, generalization, transparency, and verification of computer vision for autonomous driving systems. Topics of interest include but are not limited to:

  1. Robustness & Domain Generalization
  2. Domain Adaptation & Shift
  3. Long-tail Recognition
  4. Perception in Adverse Conditions
  5. Out-of-distribution Detection
  6. Applications of Uncertainty Quantification
  7. Monitoring, Failure Prediction & Anomaly Detection
  8. Confidence Calibration
  9. Image Enhancement Techniques

Guidelines

All submissions must be made through the CMT system before the deadline.

The BRAVO Workshop reviewing is double-blind. Authors of all submissions must follow the main conference policy on anonymity. We encourage authors to follow the ICCV 2023 Suggested Practices for Authors, except regarding supplemental material, which is not allowed.

While we encourage reproducibility, we welcome preliminary or speculative works whose source code or data may need more time before broad disclosure. We still expect evidence of ethics clearance if the submission uses novel data sources from human subjects.

BRAVO Workshop reviewers must follow the ICCV 2023 Ethics Guidelines for Reviewers. We encourage reviewers to follow the ICCV 2023 Reviewer Guidelines, and Tips to Write Good Reviews.

Camera-ready instructions

The submission guidelines are detailed here.

Posters

We will organize two poster sessions, in the morning and afternoon, inside the workshop room. All accepted works will be assigned to one of the poster sessions, including those selected for the oral spotlights.

The poster size for workshops differs from the main conference's. The panel size will be 95.4 cm wide x 138.8 cm tall (aspect ratio 0.69:1). A0 paper in portrait orientation will fit the panel with some margin.
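
For reference, a quick check of the fit: the panel's aspect ratio is 95.4 / 138.8 ≈ 0.69, and A0 portrait measures 84.1 cm × 118.9 cm, leaving roughly 11 cm of horizontal and 20 cm of vertical margin inside the panel.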

The ICCV organizers have partnered with an on-site printing service from which you may collect your printed poster; more information is available on the main conference attendance info site.

Important Dates

2023-07-20 Thu
Contributed submissions deadline (23:59 GMT)
2023-08-03 Thu
Acceptance of contributions announced to authors
2023-08-20 Sun
Full-paper camera-ready submission deadline
2023-09-15 Fri
Extended-abstract final-version submission deadline
2023-10-03 Tue
Workshop day (full day)

BRAVO Challenge

In conjunction with the BRAVO workshop at ICCV'23, we are organizing a challenge on the robustness of autonomous driving in the open world. The 2023 BRAVO Challenge aims to benchmark segmentation models on urban scenes undergoing diverse forms of natural degradation and realistic-looking synthetic corruptions. We offer two tracks: (1) models trained on a single dataset and (2) models trained on multiple heterogeneous datasets.

General rules

  1. Models in each track must be trained using only the datasets allowed for that track.
  2. It is strictly forbidden to employ generative models for synthetic data augmentation.
  3. All results must be reproducible. Participants must submit a white paper containing comprehensive technical details alongside their results. Participants must make models and inference code accessible.

Track 1 – Single-domain training

In this track, models must be trained exclusively on the published Cityscapes dataset. This track evaluates the robustness of models trained with limited supervision and geographical diversity when facing unexpected corruptions observed in real-world scenarios.

The evaluation will be performed on the 19 semantic classes of Cityscapes.

Track 2 – Multi-domain training

In this track, the models may be trained over a mix of multiple datasets, whose choice is strictly limited to the list provided below, comprising both real and synthetic domains. This track aims to assess how fewer constraints on the training data can enhance robustness.

The evaluation will be performed on the 19 semantic classes of Cityscapes. Participants may choose to maintain the label sets of each dataset or remap them to Cityscapes.
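
As an illustration of the remapping option, here is a minimal sketch that maps a source dataset's class IDs onto the 19 Cityscapes train IDs via a lookup table; the mapping pairs shown are hypothetical, not an official correspondence.

```python
import numpy as np

# Illustrative mapping from a hypothetical source dataset's class IDs to
# Cityscapes train IDs (road=0, sidewalk=1, ..., bicycle=18); the pairs
# below are made up for the example. 255 marks classes with no
# Cityscapes counterpart, which are ignored during evaluation.
SOURCE_TO_CITYSCAPES = {0: 0, 1: 1, 2: 8, 3: 13, 4: 255}

def remap_labels(label_map: np.ndarray) -> np.ndarray:
    """Remap an HxW array of source class IDs to Cityscapes train IDs."""
    lut = np.full(256, 255, dtype=np.uint8)  # unmapped IDs -> ignore
    for src_id, cs_id in SOURCE_TO_CITYSCAPES.items():
        lut[src_id] = cs_id
    return lut[label_map]
```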

Allowed training datasets for Track 2:

BRAVO Dataset

We created the benchmark dataset with real captured images and realistic-looking augmented images, repurposing existing datasets and combining them with newly generated data. The benchmark dataset comprises images from ACDC, SegmentMeIfYouCan, Out-of-context Cityscapes, and new synthetic data.

Get the full benchmark dataset at the following link: full BRAVO Dataset download link.

The dataset includes the following splits (with individual download links):

bravo-synobjs: augmented scenes with inpainted synthetic OOD objects. We augmented the validation images of Cityscapes and generated 656 images with 26 OOD objects. (download link)


bravo-synrain: augmented scenes with synthesized raindrops on the camera lens. We augmented the validation images of Cityscapes and generated 500 images with raindrops. (download link)


bravo-synflare: augmented scenes with synthesized light flares. We augmented the validation images of Cityscapes and generated 308 images with random light flares. (download link)


bravo-outofcontext: augmented scenes with random backgrounds. We augmented the validation images of Cityscapes and generated 329 images with random backgrounds. (download link)


bravo-ACDC: real scenes captured in adverse weather conditions, i.e., fog, night, rain, and snow. (download link or directly from ACDC website)


bravo-SMIYC: real scenes featuring out-of-distribution (OOD) objects rarely encountered on the road. (download link or directly from SMIYC website)

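For convenience, here is a minimal sketch of iterating over the downloaded splits; the directory layout, root name, and PNG extension are assumptions about the unpacked archives, not a documented structure.

```python
from pathlib import Path
from PIL import Image

SPLITS = ["bravo-synobjs", "bravo-synrain", "bravo-synflare",
          "bravo-outofcontext", "bravo-ACDC", "bravo-SMIYC"]

def iter_split_images(root: str, split: str):
    """Yield (path, RGB image) pairs for one split (layout assumed)."""
    for path in sorted(Path(root, split).rglob("*.png")):
        yield path, Image.open(path).convert("RGB")

# Example: count the PNG files found for each split under ./bravo
# (the root directory name is hypothetical).
for split in SPLITS:
    n = sum(1 for _ in Path("bravo", split).rglob("*.png"))
    print(f"{split}: {n} files")
```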

Metrics

For a comprehensive assessment of the robustness of various semantic segmentation models, we adopt several complementary metrics; see the benchmark task page linked below for the full definitions.
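
As one illustration (not the official metric definitions), the mean intersection-over-union over the 19 Cityscapes evaluation classes can be computed from a confusion matrix, as in the sketch below.

```python
import numpy as np

NUM_CLASSES = 19   # Cityscapes evaluation classes
IGNORE_ID = 255    # pixels excluded from evaluation

def mean_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean intersection-over-union between class-ID maps pred and gt."""
    valid = gt != IGNORE_ID
    # Confusion matrix: rows index ground truth, columns index prediction.
    idx = NUM_CLASSES * gt[valid].astype(np.int64) + pred[valid].astype(np.int64)
    conf = np.bincount(idx, minlength=NUM_CLASSES**2).reshape(NUM_CLASSES, NUM_CLASSES)
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    return float(np.mean(inter[union > 0] / union[union > 0]))
```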

Benchmark server

We are excited to unveil the BRAVO Challenge as an initiative within ELSA — European Lighthouse on Secure and Safe AI, a network of excellence funded by the European Union. The BRAVO Challenge is officially featured on the ELSA Benchmarks website as the Autonomous Driving/Robust Perception task.

Please refer to the task website for details on the submission format and schedule.

Submission format

Leaderboard

Coming soon!

Acknowledgements

We extend our heartfelt gratitude to the authors of ACDC, SegmentMeIfYouCan and Out-of-context Cityscapes for generously granting us permission to repurpose their benchmarking data. We are also thankful to the authors of GuidedDisent and Flare Removal for providing the amazing toolboxes that helped synthesize realistic-looking raindrops and light flares. All those people collectively contributed to creating BRAVO, a unified benchmark for robustness in autonomous driving.

Organizers

Supported by

Original photo by Kai Gradert on Unsplash, modified to illustrate Stable Diffusion augmentations.