When the unexpected happens, e.g., the weather badly degrades or a sensor gets blocked, the on-board perception system should diagnose the situation and react accordingly, for instance by handing over to an alternative system or to the human driver. With this in mind, we investigate ways to improve the robustness of neural networks to input variations, including adversarial attacks, and to automatically predict the quality and confidence of their predictions, as in ConfidNet (NeurIPS'19).
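As a rough illustration of the ConfidNet idea referenced above, the auxiliary model is trained to regress the True Class Probability (TCP), the softmax score of the ground-truth class, which separates failures from successes better than the usual Maximum Class Probability (MCP). A minimal NumPy sketch of these two quantities on toy data (all values and variable names are ours, for illustration only):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy batch: 3 samples, 4 classes; ground-truth class is 0 for all.
logits = np.array([[4.0, 1.0, 0.5, 0.2],   # confident and correct
                   [1.2, 1.0, 0.9, 0.8],   # uncertain
                   [0.2, 3.0, 0.1, 0.1]])  # confident but wrong
labels = np.array([0, 0, 0])

probs = softmax(logits)
mcp = probs.max(axis=-1)                     # maximum class probability
tcp = probs[np.arange(len(labels)), labels]  # true class probability

# On the misclassified third sample, MCP stays high while TCP drops,
# which is why TCP is the better failure indicator.
for i in range(len(labels)):
    print(f"sample {i}: MCP={mcp[i]:.2f}  TCP={tcp[i]:.2f}")
```

Since TCP requires the ground-truth label, it is unavailable at test time; ConfidNet therefore learns an auxiliary network that predicts it from the main model's features.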


Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation

Victor Besnier, Andrei Bursuc, Alexandre Briot, and David Picard
International Conference on Computer Vision (ICCV), 2021

StyleLess layer: Improving robustness for real-world driving

Julien Rebut, Andrei Bursuc, and Patrick Pérez
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021

Confidence Estimation via Auxiliary Models

Charles Corbière, Nicolas Thome, Antoine Saporta, Tuan-Hung Vu, Matthieu Cord, and Patrick Pérez
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021

TRADI: Tracking deep neural network weight distributions

Gianni Franchi, Andrei Bursuc, Emanuel Aldea, Séverine Dubuisson, and Isabelle Bloch
European Conference on Computer Vision (ECCV), 2020

Addressing Failure Prediction by Learning Model Confidence

Charles Corbière, Nicolas Thome, Avner Bar-Hen, Matthieu Cord, and Patrick Pérez
Neural Information Processing Systems (NeurIPS), 2019