Undersampling is a Minimax Optimal Robustness Intervention in Nonparametric Classification

From MaRDI portal
Publication: 6400205

arXiv: 2205.13094
MaRDI QID: Q6400205

Author name not available

Publication date: 25 May 2022

Abstract: While a broad range of techniques have been proposed to tackle distribution shift, the simple baseline of training on an undersampled balanced dataset often achieves close to state-of-the-art accuracy across several popular benchmarks. This is rather surprising, since undersampling algorithms discard excess majority group data. To understand this phenomenon, we ask if learning is fundamentally constrained by a lack of minority group samples. We prove that this is indeed the case in the setting of nonparametric binary classification. Our results show that in the worst case, an algorithm cannot outperform undersampling unless there is a high degree of overlap between the train and test distributions (which is unlikely to be the case in real-world datasets), or if the algorithm leverages additional structure about the distribution shift. In particular, in the case of label shift we show that there is always an undersampling algorithm that is minimax optimal. In the case of group-covariate shift we show that there is an undersampling algorithm that is minimax optimal when the overlap between the group distributions is small. We also perform an experimental case study on a label shift dataset and find that in line with our theory, the test accuracy of robust neural network classifiers is constrained by the number of minority samples.
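The undersampling baseline discussed in the abstract can be illustrated with a short sketch: discard excess majority-group samples uniformly at random until every group contributes as many training examples as the smallest (minority) group. The function name, the (X, y, groups) interface, and the subsampling details below are assumptions made for this illustration; they are not taken from the paper or its companion repository.

# Illustrative sketch (assumed interface, not the paper's reference
# implementation): keep only as many samples from each group as the
# smallest group provides, discarding the excess uniformly at random.
import numpy as np

def undersample_to_balance(X, y, groups, seed=0):
    """Return (X, y) restricted to an equal number of samples per group."""
    rng = np.random.default_rng(seed)
    group_ids, counts = np.unique(groups, return_counts=True)
    n_min = counts.min()  # sample count of the smallest (minority) group
    keep = np.concatenate([
        rng.choice(np.flatnonzero(groups == g), size=n_min, replace=False)
        for g in group_ids
    ])
    rng.shuffle(keep)  # avoid group-ordered training batches
    return X[keep], y[keep]

Under label shift, the group variable would simply be the class label itself, so this procedure reduces to balancing the classes before training.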

Has companion code repository: https://github.com/niladri-chatterji/undersampling-minimax