Debiasing convex regularized estimators and interval estimation in linear models
DOI: 10.1214/22-aos2243 · arXiv: 1912.11943 · OpenAlex: W3203574187 · MaRDI QID: Q6117025
No author found.
Publication date: 19 July 2023
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1912.11943
Keywords: confidence intervals; central limit theorem; bias correction; variance estimation; Lasso; high-dimensional linear models; Stein's formula; convex regularization; Gaussian Poincaré inequality
MSC classifications:
- Estimation in multivariate analysis (62H12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Robustness and adaptive procedures (parametric inference) (62F35)
- Nonparametric tolerance and confidence regions (62G15)
Related Items (4)
Cites Work
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Nearly unbiased variable selection under minimax concave penalty
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- Iterative hard thresholding for compressed sensing
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Fluctuations of eigenvalues and second order Poincaré inequalities
- The smallest eigenvalue of a large dimensional Wishart matrix
- Estimation of the mean of a multivariate normal distribution
- Asymptotics for high dimensional regression \(M\)-estimates: fixed design results
- Bounds on the prediction error of penalized least squares estimators with convex penalty
- Regularization and the small-ball method. I: Sparse recovery
- Adaptive estimation of a quadratic functional by model selection.
- The Lasso problem and uniqueness
- Slope meets Lasso: improved oracle bounds and optimality
- Debiasing the Lasso: optimal sample size for Gaussian designs
- Confidence intervals for high-dimensional linear regression: minimax rates and adaptivity
- Second-order Stein: SURE for SURE and other applications in high-dimensional inference
- The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning
- Fundamental barriers to high-dimensional regression with convex penalties
- De-biasing the Lasso with degrees-of-freedom adjustment
- Sorted concave penalized regression
- Simultaneous analysis of Lasso and Dantzig selector
- The degrees of freedom of partly smooth regularizers
- Sparse Matrix Inversion with Scaled Lasso
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- On robust regression with high-dimensional predictors
- Hypothesis Testing in High-Dimensional Regression Under the Gaussian Random Design Model: Asymptotic Theory
- Compressed Sensing: How Sharp Is the Restricted Isometry Property?
- Scaled sparse linear regression
- Eigenvalues and Condition Numbers of Random Matrices
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- High-Dimensional Probability
- Precise Error Analysis of Regularized \(M\)-Estimators in High Dimensions
- Learning curves of generic features maps for realistic datasets with a teacher-student model
- Algorithmic Analysis and Statistical Estimation of SLOPE via Approximate Message Passing
- A modern maximum-likelihood theory for high-dimensional logistic regression
- The LASSO Risk for Gaussian Matrices
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- Mean Field Models for Spin Glasses
- Convex functions and their applications. A contemporary approach