Uncertainty Sets for Image Classifiers using Conformal Prediction


arXiv: 2009.14193 · MaRDI QID: Q6350165

Author name not available

Publication date: 29 September 2020

Abstract: Convolutional image classifiers can achieve high predictive accuracy, but quantifying their uncertainty remains an unresolved challenge, hindering their deployment in consequential settings. Existing uncertainty quantification techniques, such as Platt scaling, attempt to calibrate the network's probability estimates, but they do not have formal guarantees. We present an algorithm that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%. The algorithm is simple and fast like Platt scaling, but provides a formal finite-sample coverage guarantee for every model and dataset. Our method modifies an existing conformal prediction algorithm to give more stable predictive sets by regularizing the small scores of unlikely classes after Platt scaling. In experiments on both Imagenet and Imagenet-V2 with ResNet-152 and other classifiers, our scheme outperforms existing approaches, achieving coverage with sets that are often 5 to 10 times smaller than a stand-alone Platt scaling baseline.
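
The sketch below gives a concrete picture of the kind of procedure the abstract describes: a split-conformal construction of prediction sets with a rank-based regularizer applied to the Platt-scaled probabilities. It is not the authors' implementation (see the repository linked below for that); the parameter names `alpha`, `lam`, and `k_reg` are illustrative assumptions, and the randomization step the paper uses for tighter coverage is omitted here.

```python
# Minimal sketch, not the paper's reference code: split-conformal prediction
# sets with a rank-based penalty on unlikely classes. Parameter names and
# defaults (alpha, lam, k_reg) are illustrative assumptions.
import numpy as np

def calibrate_threshold(probs, labels, alpha=0.1, lam=0.1, k_reg=5):
    """Compute the conformal threshold tau from a held-out calibration set.

    probs:  (n, K) array of Platt-scaled (softmax) probabilities.
    labels: (n,) array of integer true labels.
    alpha:  target miscoverage level, e.g. 0.1 for 90% coverage.
    """
    n = probs.shape[0]
    order = np.argsort(-probs, axis=1)                      # classes sorted by probability
    sorted_probs = np.take_along_axis(probs, order, axis=1)
    cumsum = np.cumsum(sorted_probs, axis=1)
    # 1-indexed rank of the true label within each sorted row.
    ranks = np.argmax(order == labels[:, None], axis=1) + 1
    # Conformal score: probability mass up to and including the true label,
    # plus a penalty that grows once the label sits deeper than rank k_reg.
    scores = cumsum[np.arange(n), ranks - 1] + lam * np.maximum(ranks - k_reg, 0)
    # Finite-sample-corrected (1 - alpha) empirical quantile of the scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level, method="higher")

def prediction_set(probs, tau, lam=0.1, k_reg=5):
    """Return the predictive set (class indices) for a single test example."""
    order = np.argsort(-probs)
    cumsum = np.cumsum(probs[order])
    ranks = np.arange(1, probs.shape[0] + 1)
    scores = cumsum + lam * np.maximum(ranks - k_reg, 0)
    # Keep every class whose regularized score is at most tau (at least one class).
    count = max(int(np.searchsorted(scores, tau, side="right")), 1)
    return order[:count].tolist()
```

On a calibration split one would call `tau = calibrate_threshold(cal_probs, cal_labels, alpha=0.1)` and then form `prediction_set(test_probs[i], tau)` for each test image. The penalty on low-ranked classes is what keeps the sets small and stable for difficult examples, which is the regularization the abstract refers to.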




Companion code repository: https://github.com/aangelopoulos/conformal-classification







