Sorted concave penalized regression
Publication: Q2284364 (MaRDI QID)
DOI: 10.1214/18-AOS1759
zbMath: 1435.62262
arXiv: 1712.09941
OpenAlex: W2982552840
Publication date: 15 January 2020
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1712.09941
Keywords: minimax rate; restricted eigenvalue; penalized least squares; SLOPE; signal strength; concave penalties; local convex approximation; sorted penalties
Mathematics Subject Classification:
- Estimation in multivariate analysis (62H12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Linear regression; mixed models (62J05)
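
The keywords name the estimator but the record carries no formula, so the following is a minimal LaTeX sketch of the sorted concave penalized least-squares objective as described in the linked arXiv abstract; the notation (design matrix \(X\), response \(y\), ordered absolute coefficients \(\lvert\beta\rvert_{(j)}\), penalty level sequence \(\lambda_j\)) is standard usage, not copied verbatim from the paper:

\[
\widehat{\beta} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p}
\Bigl\{ \tfrac{1}{2n}\,\lVert y - X\beta \rVert_2^2
+ \sum_{j=1}^{p} \rho\bigl(\lvert\beta\rvert_{(j)};\,\lambda_j\bigr) \Bigr\},
\qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0,
\]

where \(\lvert\beta\rvert_{(1)} \ge \cdots \ge \lvert\beta\rvert_{(p)}\) are the absolute coefficients sorted in decreasing order and \(\rho(\cdot;\lambda)\) is a concave penalty such as MCP or SCAD. Taking \(\rho(t;\lambda)=\lambda t\) recovers SLOPE, while a constant sequence \(\lambda_j \equiv \lambda\) recovers ordinary concave penalized regression.
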
Related Items (7)
- On polygenic risk scores for complex traits prediction
- Debiasing convex regularized estimators and interval estimation in linear models
- Matrix completion with nonconvex regularization: spectral operators and scalable algorithms
- Sorted concave penalized regression
- A unified primal dual active set algorithm for nonconvex sparse recovery
- Optimal sparsity testing in linear regression model
- Statistical and computational aspects of learning with complex structure. Abstracts from the workshop held May 5--11, 2019
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Gradient methods for minimizing composite functions
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- SLOPE-adaptive variable selection via convex optimization
- One-step sparse estimates in nonconcave penalized likelihood models
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Least angle regression. (With discussion)
- On the conditions used to prove oracle results for the Lasso
- Least squares after model selection in high-dimensional sparse models
- Slope meets Lasso: improved oracle bounds and optimality
- Sorted concave penalized regression
- Simultaneous analysis of Lasso and Dantzig selector
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Sparse Matrix Inversion with Scaled Lasso
- Reconstruction From Anisotropic Random Measurements
- Scaled sparse linear regression
- Decoding by Linear Programming
- Just relax: convex programming methods for identifying sparse signals in noise
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- A new approach to variable selection in least squares problems
- The Spike-and-Slab LASSO
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_1\)-Constrained Quadratic Programming (Lasso)
- Estimation And Selection Via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications
- Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- A general theory of concave regularization for high-dimensional sparse estimation problems