Calibrated zero-norm regularized LS estimator for high-dimensional error-in-variables regression
From MaRDI portal
Publication: 5004050
zbMath: 1470.62112 · arXiv: 1804.09312 · MaRDI QID: Q5004050
Shaohua Pan, Shujun Bi, Ting Tao
Publication date: 30 July 2021
Full work available at URL: https://arxiv.org/abs/1804.09312
Keywords: high-dimensional error-in-variables regression; multi-stage convex relaxation; convex conditioned Lasso (CoCoLasso) method; zero-norm regularized least-squares (LS) estimator
Cites Work
- Missing values: sparse inverse covariance estimation and an extension to sparse regression
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- An \(\{\ell_{1},\ell_{2},\ell_{\infty}\}\)-regularization approach to high-dimensional errors-in-variables models
- Statistics for high-dimensional data. Methods, theory and applications.
- Sparse recovery under matrix uncertainty
- CoCoLasso for high-dimensional error-in-variables regression
- High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity
- One-step sparse estimates in nonconcave penalized likelihood models
- On the conditions used to prove oracle results for the Lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- A Selective Overview of Variable Selection in High Dimensional Feature Space (Invited Review Article)
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- A Highly Efficient Semismooth Newton Augmented Lagrangian Method for Solving Lasso Problems
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Regularization and Variable Selection Via the Elastic Net
- Improved matrix uncertainty selector
- Convex Analysis
- Error Distribution for Gene Expression Data
- Linear and Conic Programming Estimators in High Dimensional Errors-in-variables Models
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers