Distribution-free binary classification: prediction sets, confidence intervals and calibration


arXiv: 2006.10564
MaRDI QID: Q6343213

Author name not available

Publication date: 18 June 2020

Abstract: We study three notions of uncertainty quantification (calibration, confidence intervals, and prediction sets) for binary classification in the distribution-free setting, that is, without making any distributional assumptions on the data. With a focus on calibration, we establish a 'tripod' of theorems that connect these three notions for score-based classifiers. A direct implication is that distribution-free calibration is possible, even asymptotically, only with a scoring function whose level sets partition the feature space into at most countably many sets. Parametric calibration schemes such as variants of Platt scaling do not satisfy this requirement, while nonparametric schemes based on binning do. To close the loop, we derive distribution-free confidence intervals for binned probabilities under both fixed-width and uniform-mass binning. As a consequence of our 'tripod' theorems, these confidence intervals for binned probabilities yield distribution-free calibration. We also derive extensions to settings with streaming data and covariate shift.
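
As an illustration of the binning-based calibration the abstract describes, the following minimal Python sketch performs uniform-mass binning on a held-out calibration set and attaches a Hoeffding-style, distribution-free confidence interval to each binned probability. This is not the paper's exact construction: the function names, the number of bins, the confidence level alpha, and the union bound over bins are illustrative assumptions; the authors' own implementation is in the companion repository linked below.

    import numpy as np

    def uniform_mass_bins(scores, n_bins):
        # Bin edges at empirical quantiles of the calibration scores, so each
        # bin receives roughly the same number of points (uniform-mass binning).
        return np.quantile(scores, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])

    def calibrate_with_bins(scores, labels, n_bins=10, alpha=0.1):
        # Estimate P(Y = 1 | bin) by the empirical label frequency in each bin,
        # and attach a Hoeffding-style interval, union-bounded over the bins
        # (an assumed construction, used here only for illustration).
        edges = uniform_mass_bins(scores, n_bins)
        bin_ids = np.searchsorted(edges, scores)
        means = np.full(n_bins, 0.5)
        lower, upper = np.zeros(n_bins), np.ones(n_bins)
        for b in range(n_bins):
            y_b = labels[bin_ids == b]
            if len(y_b) == 0:
                continue  # empty bin: keep the uninformative default
            means[b] = y_b.mean()
            half_width = np.sqrt(np.log(2.0 * n_bins / alpha) / (2.0 * len(y_b)))
            lower[b] = max(means[b] - half_width, 0.0)
            upper[b] = min(means[b] + half_width, 1.0)
        return edges, means, lower, upper

    def predict_calibrated(new_scores, edges, means):
        # Calibrated prediction: report the binned probability of the bin
        # that each new score falls into.
        return means[np.searchsorted(edges, new_scores)]

    # Example with synthetic, deliberately miscalibrated scores.
    rng = np.random.default_rng(0)
    scores = rng.uniform(size=2000)
    labels = (rng.uniform(size=2000) < scores**2).astype(float)  # true P(Y=1|s) = s^2
    edges, means, lo, hi = calibrate_with_bins(scores, labels, n_bins=10, alpha=0.1)
    print(np.round(np.c_[means, lo, hi], 3))  # binned probabilities with intervals

The fixed-width variant mentioned in the abstract would simply replace the quantile-based edges with equally spaced edges on [0, 1]; the per-bin interval construction stays the same.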

Has companion code repository: https://github.com/aigen/df-posthoc-calibration
