@article{corbiere2021confidence,
  author   = {Corbiere, Charles and Thome, Nicolas and Saporta, Antoine and Vu, Tuan-Hung and Cord, Matthieu and Perez, Patrick},
  journal  = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title    = {Confidence Estimation via Auxiliary Models},
  year     = {2021},
  volume   = {},
  number   = {},
  pages    = {1-1},
  doi      = {10.1109/TPAMI.2021.3085983},
  abstract = {Reliably quantifying the confidence of deep neural classifiers is a challenging yet fundamental requirement for deploying such models in safety-critical applications. In this paper, we introduce a novel target criterion for model confidence, namely the true class probability (TCP). We show that TCP offers better properties for confidence estimation than the standard maximum class probability (MCP). Since the true class is by essence unknown at test time, we propose to learn the TCP criterion from data with an auxiliary model, introducing a specific learning scheme adapted to this context. We evaluate our approach on the tasks of failure prediction and self-training with pseudo-labels for domain adaptation, both of which necessitate effective confidence estimates. Extensive experiments are conducted to validate the relevance of the proposed approach in each task. We study various network architectures and experiment with small and large datasets for image classification and semantic segmentation. In every tested benchmark, our approach outperforms strong baselines.}
}