Explainability of Deep Models

The concept of explainability has several facets, and the need for it is particularly strong in safety-critical applications such as autonomous driving. We investigate both methods that provide post-hoc explanations for black-box systems and approaches that directly design more interpretable models.

Selected publications

  1. Pegah Khayatan*, Mustafa Shukor*, Jayneel Parekh*, Matthieu Cord
    ICCV, 2025

  2. Eslam Abdelrahman, Liangbing Zhao, Vincent Tao Hu, Matthieu Cord, Patrick Pérez, Mohamed Elhoseiny
    ICLR, 2025

  3. Éloi Zablocki*, Valentin Gerard*, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle
    under review, 2025

  4. Jayneel Parekh, Pegah Khayatan, Mustafa Shukor, Alasdair Newson, Matthieu Cord
    NeurIPS, 2024

  5. Folco Bertini Baldassini, Mustafa Shukor, Matthieu Cord, Laure Soulier, Benjamin Piwowarski
    CVPR Workshop on Prompting in Vision, 2024

  6. Mehdi Zemni, Mickaël Chen, Éloi Zablocki, Hédi Ben-Younes, Patrick Pérez, Matthieu Cord
    CVPR, 2023

  7. Paul Jacob, Éloi Zablocki, Hédi Ben-Younes, Mickaël Chen, Patrick Pérez, Matthieu Cord
    ECCV, 2022

  8. Éloi Zablocki*, Hédi Ben-Younes*, Patrick Pérez, Matthieu Cord
    IJCV, 2022

  9. Hédi Ben-Younes*, Éloi Zablocki*, Patrick Pérez, Matthieu Cord
    Pattern Recognition, 2022