Driving in action

Getting from sensory inputs to car control goes either through a modular stack (perception → localization → forecast → planning → actuation) or, more radically, through a single end-to-end model. We work on both strategies, more specifically on action forecasting, automatic interpretation of the decisions taken by a driving system, and reinforcement / imitation learning for end-to-end systems (as in our RL work presented at CVPR 2020).
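The contrast between the two strategies can be sketched in code: the modular stack chains explicit, swappable stages, while the end-to-end approach replaces the whole chain with one learned function from sensors to control. This is a minimal illustration only; every class and function name below is hypothetical and stands in for the real components described in the papers.

```python
from dataclasses import dataclass

@dataclass
class Control:
    steering: float  # illustrative units
    throttle: float  # in [0, 1]

# --- modular stack: each stage is explicit and individually replaceable ---
def perceive(frame):           # perception: detect surrounding agents
    return frame.get("objects", [])

def localize(frame):           # localization: estimate the ego pose
    return frame.get("ego_pose", (0.0, 0.0))

def forecast(objects, pose):   # forecast: predict agent trajectories
    return [(obj, pose) for obj in objects]

def plan_path(pose, futures):  # planning: choose a trajectory / target speed
    return {"speed": 0.5 if futures else 1.0}

def actuate(plan):             # actuation: turn the plan into low-level control
    return Control(steering=0.0, throttle=plan["speed"])

def modular_stack(frame) -> Control:
    objects = perceive(frame)
    pose = localize(frame)
    futures = forecast(objects, pose)
    plan = plan_path(pose, futures)
    return actuate(plan)

# --- end-to-end: a single learned model maps sensors directly to control ---
def end_to_end(frame) -> Control:
    # stands in for one neural network trained by RL or imitation
    return Control(steering=0.0, throttle=0.8)

if __name__ == "__main__":
    frame = {"objects": ["car"], "ego_pose": (1.0, 2.0)}
    print(modular_stack(frame))
    print(end_to_end(frame))
```

The practical trade-off the sketch hints at: the modular stack is interpretable and debuggable stage by stage, while the end-to-end model avoids hand-designed interfaces between stages but is harder to inspect, which is what motivates work on interpreting its decisions.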


PLOP: Probabilistic poLynomial Objects trajectory Prediction for autonomous driving

Thibault Buhet, Emilie Wirbel, Andrei Bursuc and Xavier Perrotton
Conference on Robot Learning (CoRL), 2020

End-to-End Model-Free Reinforcement Learning for Urban Driving using Implicit Affordances

Marin Toromanoff, Emilie Wirbel, and Fabien Moutarde
Computer Vision and Pattern Recognition (CVPR), 2020