Zero-shot and open-world learning

Our research in open-world perception focuses on developing models that can recognize and adapt to novel objects and scenarios, so that perception remains safe and reliable in an ever-changing real world.

Selected publications

2025

  1. DIP: Unsupervised Dense In-Context Post-training of Visual Representations
    Sophia Sirko-Galouchenko, Antonin Vobecky, Andrei Bursuc, Nicolas Thome, Spyros Gidaris
    ICCV 2025

2024

  2. CLIP-DINOiser: Teaching CLIP a few DINO tricks for open-vocabulary semantic segmentation
    Monika Wysoczańska, Oriane Siméoni, Michaël Ramamonjisoa, Andrei Bursuc, Tomasz Trzciński, Patrick Pérez
    ECCV 2024
  3. SPOT: Self-Training with Patch-Order Permutation for Object-Centric Learning with Autoregressive Transformers
    Ioannis Kakogeorgiou, Spyros Gidaris, Konstantinos Karantzalos, Nikos Komodakis
    CVPR 2024 (highlight)
  4. CLIP-DIY: CLIP Dense Inference Yields Open-Vocabulary Semantic Segmentation For-Free
    Monika Wysoczańska, Michaël Ramamonjisoa, Tomasz Trzciński, Oriane Siméoni
    WACV 2024

2023

  5. PØDA: Prompt-driven Zero-shot Domain Adaptation
    Mohammad Fahes, Tuan-Hung Vu, Andrei Bursuc, Patrick Pérez, Raoul de Charette
    ICCV 2023

2021

  6. Generative Zero-Shot Learning for Semantic Segmentation of 3D Point Clouds
    Bjoern Michele, Alexandre Boulch, Gilles Puy, Maxime Bucher, Renaud Marlet
    3DV 2021

2019

  7. Zero-Shot Semantic Segmentation
    Maxime Bucher, Tuan-Hung Vu, Matthieu Cord, Patrick Pérez
    NeurIPS 2019