Multi-sensor perception

Automated driving relies first and foremost on a diverse range of sensors, such as Valeo’s fish-eye cameras, LiDARs, radars, and ultrasonic sensors. Making the best use of the output of each of these sensors at every instant is fundamental to understanding the complex environment of the vehicle and to gaining robustness. To this end, we explore various machine learning approaches in which sensors are considered either in isolation (as radar is in CARRADA at ICPR’20) or collectively (as in xMUDA at CVPR’20), as illustrated in the sketch below.
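
To give a flavour of the collective, cross-modal setting, here is a minimal illustrative sketch in PyTorch, not the xMUDA implementation itself: a camera branch and a LiDAR branch each predict class scores, and a KL-based consistency loss encourages each modality to mimic the other’s prediction. All module names, feature dimensions, and the number of classes are hypothetical.

    import torch
    import torch.nn.functional as F

    # Hypothetical per-modality heads: in practice these would be a 2D CNN
    # over camera images and a 3D network over LiDAR points.
    NUM_CLASSES = 10
    camera_head = torch.nn.Linear(64, NUM_CLASSES)
    lidar_head = torch.nn.Linear(32, NUM_CLASSES)

    def cross_modal_loss(cam_feat, lid_feat):
        """Each modality learns to mimic the other's class distribution."""
        log_p_cam = F.log_softmax(camera_head(cam_feat), dim=-1)
        log_p_lid = F.log_softmax(lidar_head(lid_feat), dim=-1)
        # Detach the target so each branch only moves toward the other,
        # rather than the two collapsing jointly.
        loss_cam = F.kl_div(log_p_cam, log_p_lid.exp().detach(),
                            reduction="batchmean")
        loss_lid = F.kl_div(log_p_lid, log_p_cam.exp().detach(),
                            reduction="batchmean")
        return loss_cam + loss_lid

    # Toy usage: features for 128 points observed by both sensors.
    cam_feat = torch.randn(128, 64)
    lid_feat = torch.randn(128, 32)
    print(cross_modal_loss(cam_feat, lid_feat))

Because the consistency term needs no labels, a loss of this shape can be applied on unannotated target-domain data, which is what makes the cross-modal setting attractive for unsupervised domain adaptation.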

Publications

PLOP: Probabilistic poLynomial Objects trajectory Planning for autonomous driving

Thibault Buhet, Émilie Wirbel, Andrei Bursuc, and Xavier Perrotton
Conference on Robot Learning (CoRL), 2020


Dynamic Task Weighting Methods for Multi-task Networks in Autonomous Driving Systems

Isabelle Leang, Ganesh Sistu, Fabian Burger, Andrei Bursuc, and Senthil Yogamani
IEEE International Conference on Intelligent Transportation Systems (ITSC), 2020


xMUDA: Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation

Maximilian Jaritz, Tuan-Hung Vu, Raoul de Charette, Émilie Wirbel, and Patrick Pérez
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020