The BRAVO workshop presents a unique opportunity for researchers, industry experts, and
policymakers to come together and address the critical challenge of trustworthy validation for
autonomous vehicle systems on open roads.
Advances in artificial intelligence and computer vision are propelling the rise of highly automated
advanced driver-assistance systems (ADAS) and autonomous vehicles (AVs), with the potential to
revolutionize transportation and mobility services. However, deploying data-driven safety-critical
systems with limited onboard resources and enduring guarantees on open roads remains a significant challenge.
To ensure safe deployment, ADAS/AVs must demonstrate the ability to navigate a wide range of driving
conditions, including rare and dangerous situations, severe perturbations, and even adversarial
attacks. Additionally, those capabilities must be demonstrated to regulatory bodies, to secure
certification, and to users, to earn their confidence.
The BRAVO workshop seeks to foster collaboration and innovation in developing tools and testbeds for
assessing and enhancing the robustness, generalization power, transparency, and verification of
computer vision models for ADAS/AVs. By working together, we can contribute to a safer, more
efficient, and sustainable future for transportation.
We invite you to join us at the BRAVO workshop to explore solutions and contribute to developing
reliable, robust computer vision for autonomous vehicles. Together, we can shape the future of
transportation, ensuring safety and efficiency for all road users.
Program
All quoted times refer to CEST.
8:45 - 9:00
Opening remarks
9:00 - 9:45
Invited talk #1: “Open-world Scene Understanding with Intuitive Priors” by
Raoul de Charette
9:45 - 10:30
Invited talk #2: “Real World End-to-End Learnt Driving Models — an
Invitation” by Jamie Shotton
10:30 - 11:15
Poster session #1 + Coffee break
11:15 - 12:00
Invited talk #3: “3D Open World: Generalize and Recognize Novelty” by Tatiana
Tommasi
12:00 - 12:45
Invited talk #4: “How to Safely Handle Out-of-Distribution Data in the Open
World: Challenges, Methods, and Path Forward” by Sharon Yixuan Li
12:45 - 13:45
Lunch break
13:45 - 14:15
Spotlight presentations:
GPS-GLASS: Learning Nighttime Semantic Segmentation Using Daytime Video and
GPS data
T-FFTRadNet: Object Detection with Swin Vision Transformers from Raw ADC
Radar Signals
An Empirical Analysis of Range for 3D Object Detection
14:15 - 14:45
BRAVO Challenge
14:45 - 15:30
Invited talk #5: “Fake it till you Make It: Can Synthetic Data Improve Model
Robustness” by Kate Saenko
15:30 - 16:15
Poster session #2 + Coffee break
16:15 - 17:00
Invited talk #6: “Efficient and Effective Certification for Street Scene
Segmentation” by Mario Fritz
17:00 - 17:45
Panel discussion + Q&A
17:45 - 17:55
Closing remarks
Please check the conference attendance details in advance, including the room
assignments for the workshops.
Accepted Works
Poster session #1 (morning):
- A glimpse at the first results of the AutoBehave project: a multidisciplinary approach to
evaluate the usage of our travel time in self-driving cars. Carlos F Crispim-Junior, Romain
Guesdon, Christophe Jallais, Florent Laroche, Stephanie Souche-Le Corvec, Georges Beurier, Xuguang
Wang, Laure Tougne Rodet. (Abstract)
- Anomaly-Aware Semantic Segmentation via Style-Aligned OoD Augmentation. Dan Zhang, Kaspar
Sakmann, William Beluch, Robin Hutmacher, Yumeng Li. (Full Paper)
- Camera-Based Road Snow Coverage Estimation. Kai Cordes, Hellward Broszio. (Full Paper)
- Deep Ensembles Spread Over Time – Enabling Deep Ensembles in Real-Time Applications. Isak P
Meding, Alexander Bodin, Adam Tonderski, Joakim Johnander, Christoffer Petersson, Lennart Svensson.
(Full Paper)
- On the Interplay of Convolutional Padding and Adversarial Robustness. Paul Gavrikov, Janis
Keuper. (Full Paper)
- Synthetic Dataset Acquisition for a Specific Target Domain. Joshua Niemeijer, Sudhanshu
Mittal, Thomas Brox. (Full Paper)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features. Travis Zhang,
Katie Z Luo, Cheng Perng Phoo, Yurong You, Mark Campbell, Bharath Hariharan, Kilian Weinberger.
(Full Paper)
- What Does Really Count? Estimating Relevance of Corner Cases for Semantic Segmentation in
Automated Driving. Jasmin Breitenstein, Florian Heidecker, Maria Lyssenko, Daniel Bogdoll,
Maarten Bieshaar, Marius Zöllner, Bernhard Sick, Tim Fingscheidt. (Full Paper)
Poster session #2 (afternoon):
- A Subdomain-Specific Knowledge Distillation Method for Unsupervised Domain Adaptation in Adverse
Weather Conditions. Yejin Lee, Gyuwon Choi, Donggon Jang, Daeshik Kim (Abstract)
- An Empirical Analysis of Range for 3D Object Detection. Neehar Peri, Mengtian Li, Benjamin
Wilson, Yu-Xiong Wang, James Hays, Deva Ramanan. (Full Paper)
- Fusing Pseudo Labels with Weak Supervision for Dynamic Traffic Scenarios. Harshith Mohan
Kumar, Sean Lawrence. (Abstract)
- GPS-GLASS: Learning Nighttime Semantic Segmentation Using Daytime Video and GPS data.
Hongjae Lee, Changwoo Han, Jun-Sang Yoo, Seung-Won Jung. (Full Paper)
- Identifying Systematic Errors in Object Detectors with the SCROD Pipeline. Valentyn
Boreiko, Matthias Hein, Jan Hendrik Metzen. (Full Paper)
- Introspection of 2D Object Detection using Processed Neural Activation Patterns in Automated
Driving Systems. Hakan Y Yatbaz, Mehrdad Dianati, Konstantinos Koufos, Roger Woodman. (Full
Paper)
- On Offline Evaluation of 3D Object Detection for Autonomous Driving. Tim Schreier, Katrin
Renz, Andreas Geiger, Kashyap Chitta. (Full Paper)
- Sensitivity analysis of AI-based algorithms for autonomous driving on optical wavefront
aberrations induced by the windshield. Dominik W Wolf, Markus Ulrich, Nikhil Kapoor. (Full
Paper)
- T-FFTRadNet: Object Detection with Swin Vision Transformers from Raw ADC Radar Signals.
James Giroux, Martin Bouchard, Robert Laganiere. (Full Paper)
Reviewers
We extend our warmest thanks to the team of reviewers who made this call for contributions possible:
- Adrien Lafage (ENSTA Paris)
- Alexandre Boulch (Valeo.ai)
- Alexandre Ramé (LIP6)
- Antoine Saporta (Meero)
- Antonin Vobecky (Valeo.ai / CTU, FEE / CIIRC)
- Arthur Ouaknine (McGill University / Mila)
- Cédric Rommel (Valeo.ai)
- Charles Corbiere (Valeo.ai)
- David Hurych (Valeo.ai)
- Dmitry Kangin (Lancaster University)
- Eduard Zamfir (University of Wurzburg)
- Emanuel Aldea (Paris-Saclay University)
- Emilie Wirbel (Nvidia)
- Fabio Arnez (Université Paris-Saclay, CEA, List)
- Fabio Pizzati (University of Oxford)
- Fredrik Gustafsson (Uppsala University)
- Himalaya Jain (Helsing)
- Krzysztof Lis (EPFL)
- Loïck Chambon (Valeo.ai)
- Matej Grcić (University of Zagreb)
- Matthieu Cord (Valeo.ai / Sorbonne University)
- Maximilian Jaritz (Amazon)
- Mickael Chen (Valeo.ai)
- Nazir Nayal (Koç University)
- Olivier Laurent (Université Paris-Saclay)
- Oriane Siméoni (Valeo.ai)
- Patrick Pérez (Valeo.ai)
- Pau de Jorge Aranda (University of Oxford)
- Raffaello Camoriano (Politecnico di Torino)
- Raoul de Charette (Inria)
- Renaud Marlet (Valeo.ai / École des Ponts ParisTech)
- Riccardo Volpi (Naver Labs)
- Spyros Gidaris (Valeo.ai)
- Suha Kwak (POSTECH)
...and three other reviewers who preferred to remain anonymous.
Call for Contributions
We invite participants to submit their work to the BRAVO Workshop as full papers or extended abstracts.
Full-Paper Submissions
Full papers must present original research, not published elsewhere, and follow the ICCV
main conference format with a length of 4 to 8 pages (extra pages with references only are
allowed). Supplemental materials are not allowed. Accepted full papers will be included in the
conference proceedings.
Extended Abstract Submissions
We welcome extended abstracts, which may present work of a more speculative or preliminary nature that is
not yet suited to a full-length paper. Authors are also welcome to submit extended abstracts on previously
or concurrently published works that could foster the workshop objectives.
Extended abstracts must have no more than 1000 words, in addition to a single illustration and
references. We suggest authors use the extended
abstract template provided.
Accepted extended abstracts will be presented without inclusion in the proceedings.
Topics of Interest
The workshop welcomes submissions on all topics related to robustness, generalization, transparency, and
verification of computer vision for autonomous driving systems. Topics of interest include but are not
limited to:
- Robustness & Domain Generalization
- Domain Adaptation & Shift
- Long-tail Recognition
- Perception in Adverse Conditions
- Out-of-distribution Detection
- Applications of Uncertainty Quantification
- Monitoring, Failure Prediction & Anomaly Detection
- Confidence Calibration
- Image Enhancement Techniques
Guidelines
All submissions must be made through the CMT system before the deadline.
The BRAVO Workshop reviewing is double-blind. Authors of all submissions must
follow the main conference policy on anonymity. We encourage authors to follow the ICCV 2023 Suggested
Practices for Authors, except regarding supplemental material, which is not allowed.
While we encourage reproducibility, we welcome preliminary/speculative works whose source code or data
may need more time before broad disclosure. We still expect evidence of ethics clearance if the
submission uses novel data sources involving human subjects.
BRAVO Workshop reviewers must follow the ICCV 2023 Ethics Guidelines for Reviewers. We encourage reviewers to follow the
ICCV 2023
Reviewer Guidelines, and Tips to Write Good Reviews.
Camera-ready instructions
The submission guidelines are detailed here.
Posters
We will organize two poster sessions, in the morning and afternoon, inside the workshop room. All
accepted works will be assigned to one of the poster sessions, including those selected for the oral
spotlights.
The poster size for workshops differs from the main conference's. The panel
size will be 95.4 cm wide x 138.8 cm tall (aspect ratio 0.69:1). A0 paper in portrait orientation will fit
the panel with some margin.
The ICCV organizers partnered with an on-site printing service from which you may collect your printed
poster: more information at the
main conference attendance info site.
Important Dates
2023-07-20 Thu
Contributed submissions deadline (23:59 GMT)
2023-08-03 Thu
Acceptance of contributions announced to authors
2023-08-20 Sun
Full-paper camera-ready submission deadline
2023-09-15 Fri
Extended-abstract final-version submission deadline
2023-10-03 Tue
Workshop day (full day)
BRAVO Challenge
In conjunction with the BRAVO workshop at ICCV'23, we are organizing a challenge on the robustness of
autonomous driving in the open world. The 2023 BRAVO Challenge aims at benchmarking segmentation models
on urban scenes undergoing diverse forms of natural degradation and realistic-looking synthetic
corruptions. We offer two tracks for benchmarking segmentation models: (1) trained on a single dataset
and (2) trained on multiple heterogeneous datasets.
General rules
- Models in each track must be trained using only the datasets allowed for that track.
- It is strictly forbidden to employ generative models for synthetic data augmentation.
- All results must be reproducible. Participants must submit a white paper containing comprehensive
technical details alongside their results. Participants must make models and inference code
accessible.
Track 1 – Single-domain training
In this track, models must be trained exclusively on the published
Cityscapes dataset. This track
evaluates the
robustness of models trained with limited supervision and geographical diversity when facing unexpected
corruptions observed in real-world scenarios.
The evaluation will be performed on the 19 semantic classes of Cityscapes.
Track 2 – Multi-domain training
In this track, the models may be trained over a mix of multiple datasets, whose choice is strictly
limited to the list provided below, comprising both real and synthetic domains. This track aims to
assess how fewer constraints on the training data can enhance robustness.
The evaluation will be performed on the 19 semantic classes of Cityscapes. Participants may choose to
maintain the label sets of each dataset or remap them to Cityscapes, as in the sketch below.
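For illustration, a minimal sketch of such a remapping via a lookup table; the mapping and the names
SOURCE_TO_CITYSCAPES and remap_labels are hypothetical, not part of any official challenge kit, and the
actual id correspondence must be taken from each dataset's documentation:

```python
import numpy as np

# Illustrative mapping from a hypothetical source dataset's ids to the
# 19 Cityscapes train ids; classes without a counterpart map to 255.
SOURCE_TO_CITYSCAPES = {
    0: 0,    # hypothetical source "road"     -> Cityscapes "road" (id 0)
    1: 1,    # hypothetical source "sidewalk" -> Cityscapes "sidewalk" (id 1)
    2: 255,  # a class with no Cityscapes counterpart -> ignore id
}

def remap_labels(label_map: np.ndarray, mapping: dict,
                 ignore_id: int = 255) -> np.ndarray:
    """Remap an HxW uint8 label map; ids absent from `mapping` become `ignore_id`."""
    lut = np.full(256, ignore_id, dtype=np.uint8)
    for src, dst in mapping.items():
        lut[src] = dst
    return lut[label_map]

# Usage: remapped = remap_labels(labels, SOURCE_TO_CITYSCAPES)
```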
Allowed training datasets for Track 2:
BRAVO Dataset
We created the benchmark dataset with real captured images and realistic-looking augmented images,
repurposing existing datasets and combining them with newly generated data. The benchmark dataset
comprises images from ACDC,
SegmentMeIfYouCan,
Out-of-context Cityscapes, and new
synthetic data.
Get the full benchmark dataset at the full BRAVO Dataset download link.
The dataset includes the following splits (with individual download links):
- bravo-synobjs: augmented scenes with inpainted synthetic OOD objects. We augmented the
validation images of Cityscapes and generated 656 images with 26 OOD objects. (download link)
- bravo-synrain: augmented scenes with synthesized raindrops on the camera lens. We augmented the
validation images of Cityscapes and generated 500 images with raindrops. (download link)
- bravo-synflare: augmented scenes with synthesized light flares. We augmented the validation
images of Cityscapes and generated 308 images with random light flares. (download link)
- bravo-outofcontext: augmented scenes with random backgrounds. We augmented the validation images
of Cityscapes and generated 329 images with random backgrounds. (download link)
- bravo-ACDC: real scenes captured in adverse weather conditions, i.e., fog, night, rain, and snow.
(download link or directly from the ACDC website)
- bravo-SMIYC: real scenes featuring out-of-distribution (OOD) objects rarely encountered on the
road. (download link or directly from the SMIYC website)
Metrics
For a comprehensive assessment of the robustness of various semantic segmentation models, we adopt the
following metrics:
- mIoU: mean Intersection over Union, quantifying the overlap between predictions and ground-truth
labels as the ratio of true positives to the total of true positives, false positives, and false
negatives, averaged over classes. Evaluated splits: bravo-ACDC, bravo-synrain, bravo-synflare, bravo-outofcontext.
- ECE: Expected Calibration Error, measuring the expected difference between accuracy and
predicted uncertainty. Evaluated splits: bravo-ACDC, bravo-synrain, bravo-synflare,
bravo-outofcontext.
- AUPR-Success: Area Under the Precision-Recall curve, computed using semantic prediction successes
as the positive class. Evaluated splits: bravo-ACDC, bravo-synrain, bravo-synflare, bravo-outofcontext.
The evaluation code should resemble: sklearn.metrics.precision_recall_curve(pred == label, conf).
- AUPR-Error: Area Under the Precision-Recall curve, computed using semantic prediction failures as
the positive class. Evaluated splits: bravo-ACDC, bravo-synrain, bravo-synflare, bravo-outofcontext.
The evaluation code should resemble: sklearn.metrics.precision_recall_curve(pred != label, -conf).
- AUROC-ood: Area Under the ROC Curve, a threshold-free metric quantifying the probability that a
randomly chosen out-of-distribution example receives a lower confidence than a randomly chosen
in-distribution one. Evaluated splits: bravo-ACDC, bravo-synrain, bravo-synflare, bravo-synobjs,
bravo-SMIYC. The evaluation code should resemble: sklearn.metrics.roc_curve(ood_label, -conf).
- AUPR-ood: Area Under the Precision-Recall curve, computed using the out-of-distribution pixels as
the positive class, scored by negated confidence. Evaluated splits: bravo-ACDC, bravo-synrain,
bravo-synflare, bravo-synobjs, bravo-SMIYC. The evaluation code should resemble:
sklearn.metrics.precision_recall_curve(ood_label, -conf).
- FPR@95TPR-ood: the False Positive Rate measured at a True Positive Rate of 95%. Evaluated
splits: bravo-ACDC, bravo-synrain, bravo-synflare, bravo-synobjs, bravo-SMIYC.
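As an illustration of how these metrics can be computed, here is a minimal sketch using scikit-learn on
flattened per-pixel arrays; the toy arrays are hypothetical, and the official evaluation code on the
benchmark server may differ in details:

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve, roc_curve

# Toy flattened per-pixel arrays; in practice these come from the
# prediction, label, and confidence maps of an evaluated split.
pred = np.array([0, 1, 2, 1, 0])             # predicted class ids
label = np.array([0, 1, 1, 1, 2])            # ground-truth class ids
conf = np.array([0.9, 0.8, 0.55, 0.7, 0.4])  # confidence in [0, 1]
ood_label = np.array([0, 0, 1, 0, 1])        # 1 where a pixel is OOD

# mIoU: per-class intersection over union, averaged over observed classes.
classes = np.union1d(pred, label)
ious = [np.sum((pred == c) & (label == c)) / np.sum((pred == c) | (label == c))
        for c in classes]
miou = np.mean(ious)

# AUPR-Error: prediction failures as the positive class, scored by -conf.
prec, rec, _ = precision_recall_curve(pred != label, -conf)
aupr_error = auc(rec, prec)

# AUROC-ood and FPR@95TPR-ood: OOD pixels as positives, scored by -conf.
fpr, tpr, _ = roc_curve(ood_label, -conf)
auroc_ood = auc(fpr, tpr)
fpr_at_95tpr = fpr[np.searchsorted(tpr, 0.95)]
```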
Benchmark server
We are excited to unveil the BRAVO Challenge as an initiative within
ELSA — European Lighthouse on Secure and Safe AI,
a network of excellence funded by the European Union. The BRAVO Challenge is officially featured on the
ELSA Benchmarks website as
the Autonomous Driving/Robust Perception task.
Please refer to the task website for detailed information on the submission format and schedule.
Submission format
- A single tar archive with the same structure of the bravo dataset:
-
bravo_ACDC
-
bravo_synrain
- ...
- For each input image "ori_name.png", we requires two corresponding files: "ori_name_pred.png" for
the semantic prediction and "ori_name_conf.png" for the confidence level regarding the model's
predictions."
- Semantic predictions pred must be 8-bit grayscale .png images (numpy.uint8). Predictions must be
encoded in the Cityscapes 19-class format, e.g., road should correspond to ID 0. The 19 Cityscapes
classes are:
['road', 'sidewalk', 'building', 'wall', 'fence', 'pole', 'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky', 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', 'bicycle'].
Please refer to cityscapes.json for the 19-class mapping.
- Confidence maps conf must be 16-bit grayscale .webp images (numpy.uint16), where a value of 0
corresponds to confidence 0.0 and a value of 65535 corresponds to confidence 1.0. The same confidence
map conf is used for all metrics. The Python code for saving confidence should resemble:
cv2.imwrite(conf_webp_file, conf, [cv2.IMWRITE_WEBP_QUALITY, 100]).
- Example submission.
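For illustration, a minimal sketch of writing one image's files in the format above, assuming
hypothetical per-pixel softmax probabilities and using max softmax probability as the confidence score
(one choice among many); the file names mirror the convention above:

```python
import numpy as np
import cv2  # opencv-python

# Hypothetical H x W x 19 per-pixel class probabilities from a softmax.
probs = np.random.rand(512, 1024, 19).astype(np.float32)
probs /= probs.sum(axis=-1, keepdims=True)

# Prediction: argmax class ids in the Cityscapes 19-class format,
# saved as an 8-bit grayscale PNG.
pred = probs.argmax(axis=-1).astype(np.uint8)
cv2.imwrite("ori_name_pred.png", pred)

# Confidence: max softmax probability scaled to [0, 65535], saved with the
# command given in the instructions above. Note that 16-bit WebP support
# may vary across OpenCV builds.
conf = np.round(probs.max(axis=-1) * 65535).astype(np.uint16)
cv2.imwrite("ori_name_conf.webp", conf, [cv2.IMWRITE_WEBP_QUALITY, 100])
```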
Leaderboard
Coming soon!
Acknowledgements
We extend our heartfelt gratitude to the authors of
ACDC,
SegmentMeIfYouCan and
Out-of-context Cityscapes for generously
granting us
permission to repurpose their benchmarking data. We are also thankful to the authors of
GuidedDisent and
Flare Removal
for providing the amazing toolboxes that helped synthesize realistic-looking raindrops and light
flares. All those people collectively contributed to creating BRAVO, a unified benchmark
for robustness in autonomous driving.
Original photo by Kai Gradert on Unsplash, modified to illustrate Stable Diffusion augmentations.