Frugal learning

Collecting sufficiently diverse data, and annotating it precisely, is complex, costly and time-consuming. To dramatically reduce these needs, we explore various alternatives to fully-supervised learning, e.g., training that is unsupervised (as rOSD at ECCV’20), self-supervised (as BoWNet at CVPR’20), semi-supervised, active, zero-shot (as ZS3 at NeurIPS’19) or few-shot. We also investigate training with fully-synthetic data (in combination with unsupervised domain adaptation) and with GAN-augmented data.
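As a toy illustration of the semi-supervised idea (a hypothetical sketch, not the method of any paper below), pseudo-labeling fits a model on a handful of labeled points, then retrains it on its own confident predictions over the unlabeled pool:

```python
# Pseudo-labeling sketch with a nearest-centroid classifier (toy example).
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes; only 2 points per class carry labels.
X0 = rng.normal(loc=-2.0, scale=1.0, size=(100, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(100, 2))
X_labeled = np.vstack([X0[:2], X1[:2]])
y_labeled = np.array([0, 0, 1, 1])
X_unlabeled = np.vstack([X0[2:], X1[2:]])

def centroids(X, y):
    # One centroid per class.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(C, X):
    # Assign each point to the nearest centroid; also return distances.
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    return d.argmin(axis=1), d

# Step 1: fit on the labeled points only.
C = centroids(X_labeled, y_labeled)

# Step 2: pseudo-label the unlabeled pool, keeping only confident points
# (large margin between the distances to the two centroids).
y_pseudo, d = predict(C, X_unlabeled)
confident = np.abs(d[:, 0] - d[:, 1]) > 1.0

# Step 3: retrain on labeled + confident pseudo-labeled data.
X_all = np.vstack([X_labeled, X_unlabeled[confident]])
y_all = np.concatenate([y_labeled, y_pseudo[confident]])
C = centroids(X_all, y_all)
```

The confidence threshold (here, a distance margin of 1.0) is the key knob: too low and wrong pseudo-labels pollute the retraining set, too high and little unlabeled data is used.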

Publications

Toward Unsupervised, Multi-Object Discovery in Large-Scale Image Collections

Huy V. Vo, Patrick Pérez and Jean Ponce
European Conference on Computer Vision (ECCV), 2020


Learning Representations by Predicting Bags of Visual Words

Spyros Gidaris, Andrei Bursuc, Nikos Komodakis, Patrick Pérez, and Matthieu Cord
Computer Vision and Pattern Recognition (CVPR), 2020


This Dataset Does Not Exist: Training Models From Generated Images

Victor Besnier, Himalaya Jain, Andrei Bursuc, Matthieu Cord, and Patrick Pérez
International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020


Zero-Shot Semantic Segmentation

Maxime Bucher, Tuan Hung Vu, Matthieu Cord, and Patrick Pérez
Neural Information Processing Systems (NeurIPS), 2019


Boosting Few-Shot Visual Learning With Self-Supervision

Spyros Gidaris, Andrei Bursuc, Nikos Komodakis, Patrick Pérez, and Matthieu Cord
International Conference on Computer Vision (ICCV), 2019


Unsupervised Image Matching and Object Discovery as Optimization

Huy V. Vo, Francis Bach, Minsu Cho, Kai Han, Yann LeCun, Patrick Pérez and Jean Ponce
Computer Vision and Pattern Recognition (CVPR), 2019