Nearly optimal minimax estimator for high-dimensional sparse linear regression
Publication: 385791
DOI: 10.1214/13-AOS1141 · zbMath: 1360.62391 · arXiv: 1206.6536 · MaRDI QID: Q385791
Publication date: 11 December 2013
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1206.6536
Keywords: linear regression; minimax estimation; compressed sensing; nearest neighbor estimator; optimal minimax estimator; orthogonal projection estimator; projected nearest neighbor estimator; sparsity constraint
MSC classification: Asymptotic properties of nonparametric inference (62G20) ⋮ Linear regression; mixed models (62J05) ⋮ Minimax procedures in statistical decision theory (62C20)
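For context on the keywords, a minimal sketch of the standard sparse linear regression setup they refer to (the notation below is assumed for illustration, not taken from the record): one observes \(y = X\beta + \varepsilon\) with a design matrix \(X \in \mathbb{R}^{n \times p}\), Gaussian noise \(\varepsilon \sim N(0, \sigma^2 I_n)\), and a sparsity constraint \(\|\beta\|_0 \le k\). The minimax prediction risk studied in this setting is
\[
\inf_{\hat\beta} \; \sup_{\|\beta\|_0 \le k} \; \frac{1}{n}\, \mathbb{E}\bigl\|X\hat\beta - X\beta\bigr\|_2^2 ,
\]
which, under suitable conditions on \(X\), is known to scale as \(\sigma^2 k \log(p/k)/n\) up to constants.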
Related Items
An Improved Private Mechanism for Small Databases ⋮ Kolmogorov \(n\)-widths of function classes induced by a non-degenerate differential operator: a convex duality approach
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- Minimax risk over hyperrectangles, and implications
- Invertibility of ``large'' submatrices with applications to the geometry of Banach spaces and harmonic analysis
- Gelfand numbers of operators with values in a Hilbert space
- Optimal filtering of square-integrable signals in Gaussian noise
- Risk bounds for model selection via penalization
- Minimax risk over \(l_p\)-balls for \(l_q\)-error
- Model selection for regression on a fixed design
- Model selection for Gaussian regression with random design
- Geometric optimization problems likely not contained in \(\mathbb A\mathbb P\mathbb X\)
- Nonconcave penalized likelihood with a diverging number of parameters.
- How well can we estimate a sparse vector?
- Simultaneous analysis of Lasso and Dantzig selector
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Approximation and learning by greedy algorithms
- Asymptotically Minimax Adaptive Estimation. I: Upper Bounds. Optimally Adaptive Estimates
- Decoding by Linear Programming
- Smoothed analysis of algorithms
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Adaptive estimation with soft thresholding penalties
- Shifting Inequality and Recovery of Sparse Signals
- The Generic Chaining
- The LASSO Risk for Gaussian Matrices
- The Noise-Sensitivity Phase Transition in Compressed Sensing
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over \(\ell_q\)-Balls
- Sparse Recovery With Orthogonal Matching Pursuit Under RIP
- Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise
- An alternative point of view on Lepski's method
- Approximating the Radii of Point Sets
- Introduction to nonparametric estimation
- Compressed sensing