On the robustness of minimum norm interpolators and regularized empirical risk minimizers
From MaRDI portal
Publication: 2091842
DOI: 10.1214/22-AOS2190 · Wikidata: Q114060454 · Scholia: Q114060454 · MaRDI QID: Q2091842
Sara van de Geer, Matthias Löffler, Geoffrey Chinot
Publication date: 2 November 2022
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/2012.00807
Keywords: interpolation; regularization; sparse linear regression; basis pursuit; trace regression; minimum norm interpolation
Related Items
- Tractability from overparametrization: the example of the negative perceptron
- AdaBoost and robust one-bit compressed sensing
Cites Work
- Geometric inference for general high-dimensional linear inverse problems
- Oracle inequalities and optimal inference under group sparsity
- Instance-optimality in probability with an \(\ell_1\)-minimization decoder
- Minimax ridge regression estimation
- Regularization and the small-ball method. I: Sparse recovery
- Adaptive estimation of a quadratic functional by model selection.
- The convex geometry of linear inverse problems
- AdaBoost and robust one-bit compressed sensing
- Surprises in high-dimensional ridgeless least squares interpolation
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
- Just interpolate: kernel “ridgeless” regression can generalize
- A general framework for Bayes structured linear models
- Stability and instance optimality for Gaussian measurements in compressed sensing
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- Simultaneous analysis of Lasso and Dantzig selector
- Stability and robustness of \(\ell_1\)-minimizations with Weibull matrices and redundant dictionaries
- Smallest singular value of random matrices and geometry of random polytopes
- Concentration Inequalities
- Mathematical Foundations of Infinite-Dimensional Statistical Models
- Bounding the Smallest Singular Value of a Random Matrix Without Concentration
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Atomic Decomposition by Basis Pursuit
- Regularization and the small-ball method II: complexity dependent error rates
- Robustness to Unknown Error in Sparse Regularization
- On the geometry of polytopes generated by heavy-tailed random vectors
- The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
- Benign overfitting in linear regression
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- Modern regularization methods for inverse problems
- The Power of Convex Relaxation: Near-Optimal Matrix Completion
- A Simpler Approach to Matrix Completion
- Model Selection and Estimation in Regression with Grouped Variables
- Stable signal recovery from incomplete and inaccurate measurements
- Ridge Regression: Biased Estimation for Nonorthogonal Problems