Overfitting Can Be Harmless for Basis Pursuit, But Only to a Degree
From MaRDI portal
Publication: 6333993
arXiv: 2002.00492
MaRDI QID: Q6333993
Peizhong Ju, Jia Liu, Xiaojun Lin
Publication date: 2 February 2020
Abstract: Recently, there has been significant interest in studying the so-called "double descent" of the generalization error of linear regression models in the overparameterized, overfitting regime, with the hope that such analysis may provide a first step toward understanding why overparameterized deep neural networks (DNNs) still generalize well. However, to date most of these studies have focused on the min ℓ2-norm solution that overfits the data. In contrast, in this paper we study the overfitting solution that minimizes the ℓ1-norm, which is known as Basis Pursuit (BP) in the compressed sensing literature. Under a sparse true linear regression model with i.i.d. Gaussian features, we show that for a large range of the number of features p, up to a limit that grows exponentially with the number of samples n, with high probability the model error of BP is upper bounded by a value that decreases with p. To the best of our knowledge, this is the first analytical result in the literature establishing the double descent of overfitting BP for finite n and p. Further, our results reveal significant differences between the double descent of BP and that of min ℓ2-norm solutions. Specifically, the double-descent upper bound of BP is independent of the signal strength, and for high SNR and sparse models the descent floor of BP can be much lower and wider than that of min ℓ2-norm solutions.
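The BP solution described in the abstract, i.e. the minimum-ℓ1-norm interpolator, can be computed as a linear program by splitting the coefficient vector into its positive and negative parts. The sketch below is an illustration of this standard reformulation using SciPy's `linprog`; it is not taken from the paper or its companion repository, and the dimensions (n = 40 samples, p = 200 features, sparsity 3) are arbitrary choices for a noiseless demonstration.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(X, y):
    """Solve min ||beta||_1 subject to X @ beta = y as a linear program.

    Split beta = u - v with u, v >= 0, so that ||beta||_1 = sum(u + v)
    and the equality constraint becomes [X, -X] @ [u; v] = y.
    """
    n, p = X.shape
    c = np.ones(2 * p)                  # objective: sum(u) + sum(v)
    A_eq = np.hstack([X, -X])           # interpolation constraint
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    uv = res.x
    return uv[:p] - uv[p:]

# Overparameterized sparse regression with i.i.d. Gaussian features (p >> n)
rng = np.random.default_rng(0)
n, p, s = 40, 200, 3
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = [2.0, -1.5, 1.0]        # sparse ground truth
y = X @ beta_true                       # noiseless observations for illustration
beta_bp = basis_pursuit(X, y)
print(np.linalg.norm(beta_bp - beta_true))  # model error of the BP solution
```

In this noiseless, very sparse setting, BP interpolates the data while recovering the true coefficients almost exactly, which is consistent with the low descent floor for sparse models discussed in the abstract.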
Has companion code repository: https://github.com/functionadvanced/basis_pursuit_code
This page was built for publication: Overfitting Can Be Harmless for Basis Pursuit, But Only to a Degree