Fundamental barriers to high-dimensional regression with convex penalties
DOI: 10.1214/21-AOS2100 · zbMath: 1486.62198 · arXiv: 1903.10603 · MaRDI QID: Q2119224
Publication date: 23 March 2022
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1903.10603
Keywords: convex; high-dimensional regression; M-estimation; penalty; approximate message passing; computational-to-statistical gaps
MSC classifications: Asymptotic properties of parametric estimators (62F12) · Estimation in multivariate analysis (62H12) · Ridge regression; shrinkage estimators (Lasso) (62J07) · Linear regression; mixed models (62J05) · Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
Cites Work
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- An iterative construction of solutions of the TAP equations for the Sherrington-Kirkpatrick model
- Log-concavity and strong log-concavity: a review
- On the impact of predictor geometry on the performance of high-dimensional ridge-regularized generalized robust regression estimators
- SLOPE-adaptive variable selection via convex optimization
- Notes on computational-to-statistical gaps: predictions using statistical physics
- Fundamental limits of symmetric low-rank matrix estimation
- The convex geometry of linear inverse problems
- On the conditions used to prove oracle results for the Lasso
- Slope meets Lasso: improved oracle bounds and optimality
- The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning
- Universality in polytope phase transitions and message passing algorithms
- Optimal transport for applied mathematicians. Calculus of variations, PDEs, and modeling
- Simultaneous analysis of Lasso and Dantzig selector
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- On robust regression with high-dimensional predictors
- Decoding by Linear Programming
- Information, Physics, and Computation
- Compressive Phase Retrieval via Generalized Approximate Message Passing
- Precise Error Analysis of Regularized \(M\)-Estimators in High Dimensions
- Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting
- State evolution for approximate message passing with non-separable functions
- Mutual Information and Optimality of Approximate Message-Passing in Random Linear Estimation
- Algorithmic Analysis and Statistical Estimation of SLOPE via Approximate Message Passing
- State evolution for general approximate message passing algorithms, with applications to spatial coupling
- Living on the edge: phase transitions in convex programs with random data
- On the Convergence of Approximate Message Passing With Arbitrary Matrices
- Vector Approximate Message Passing
- A modern maximum-likelihood theory for high-dimensional logistic regression
- Optimal errors and phase transitions in high-dimensional generalized linear models
- Optimization-Based AMP for Phase Retrieval: The Impact of Initialization and $\ell_{2}$ Regularization
- Universality laws for randomized dimension reduction, with applications
- The LASSO Risk for Gaussian Matrices
- Estimation in Gaussian Noise: Properties of the Minimum Mean-Square Error
- Applications of the Lindeberg Principle in Communications and Statistical Learning
- The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing
- Lower Bounds for Sparse Recovery
- Simultaneous Regression Shrinkage, Variable Selection, and Supervised Clustering of Predictors with OSCAR