Universality of regularized regression estimators in high dimensions
From MaRDI portal
Publication: 6183759
DOI: 10.1214/23-aos2309
arXiv: 2206.07936
MaRDI QID: Q6183759
Publication date: 4 January 2024
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/2206.07936
universality; ridge regression; random matrix theory; robust regression; Lasso; high-dimensional asymptotics; Gaussian comparison inequalities; Lindeberg's principle
Random matrices (probabilistic aspects) (60B20)
Approximations to statistical distributions (nonasymptotic) (62E17)
Functional limit theorems; invariance principles (60F17)
Cites Work
- On asymptotically optimal confidence regions and tests for high-dimensional models
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- On the impact of predictor geometry on the performance of high-dimensional ridge-regularized generalized robust regression estimators
- A generalization of the Lindeberg principle
- Estimation of the mean of a multivariate normal distribution
- Fundamental limits of symmetric low-rank matrix estimation
- High-dimensional asymptotics of prediction: ridge regression and classification
- Robust regression: Asymptotics, conjectures and Monte Carlo
- Weak convergence and empirical processes. With applications to statistics
- Debiasing the Lasso: optimal sample size for Gaussian designs
- Universality of approximate message passing algorithms
- Second-order Stein: SURE for SURE and other applications in high-dimensional inference
- The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning
- Central limit theorem and bootstrap approximation in high dimensions: near \(1/\sqrt{n}\) rates via implicit smoothing
- Fundamental barriers to high-dimensional regression with convex penalties
- Approximate message passing algorithms for rotationally invariant matrices
- Surprises in high-dimensional ridgeless least squares interpolation
- De-biasing the Lasso with degrees-of-freedom adjustment
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
- High-dimensional central limit theorems by Stein's method
- Universality in polytope phase transitions and message passing algorithms
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- On robust regression with high-dimensional predictors
- Hypothesis Testing in High-Dimensional Regression Under the Gaussian Random Design Model: Asymptotic Theory
- Precise Error Analysis of Regularized \(M\)-Estimators in High Dimensions
- Generalisation error in learning with random features and the hidden manifold model
- Learning curves of generic features maps for realistic datasets with a teacher-student model
- The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
- Does SLOPE outperform bridge regression?
- A model of double descent for high-dimensional binary linear classification
- Mean Field Asymptotics in High-Dimensional Statistics: From Exact Results to Efficient Algorithms
- A modern maximum-likelihood theory for high-dimensional logistic regression
- Optimal errors and phase transitions in high-dimensional generalized linear models
- Universality laws for randomized dimension reduction, with applications
- The LASSO Risk for Gaussian Matrices
- Applications of the Lindeberg Principle in Communications and Statistical Learning
- The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing
- Robust Estimation of a Location Parameter
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- Mean Field Models for Spin Glasses
- Ridge regression and asymptotic minimax estimation over spheres of growing dimension
- Nearly optimal central limit theorem and bootstrap approximations in high dimensions
- Debiasing convex regularized estimators and interval estimation in linear models