A unifying framework of high-dimensional sparse estimation with difference-of-convex (DC) regularizations
From MaRDI portal
Publication: Q2163076
DOI: 10.1214/21-STS832
Jong-Shi Pang, Xiaoming Huo, Shanshan Cao
Publication date: 10 August 2022
Published in: Statistical Science
Full work available at URL: https://arxiv.org/abs/1812.07130
Keywords: asymptotic optimality; model selection consistency; nonconvex regularization; (generalized) linear regression; DC algorithms; difference of convex (DC) functions; high-dimensional sparse estimation
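As background for the "DC algorithms" keyword, a minimal illustrative sketch (not the paper's exact method) of the DC algorithm (DCA) applied to MCP-penalized least squares: the MCP penalty is split as p(t) = λt − h(t) with h convex, the concave part −h is linearized at the current iterate, and each resulting weighted-lasso subproblem is solved by proximal gradient steps. All function and parameter names here are assumptions for illustration.

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: proximal operator of the weighted l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dca_mcp(X, y, lam=3.0, gamma=3.0, n_outer=10, n_inner=200):
    """Illustrative DCA for MCP-penalized least squares (a sketch, not
    the paper's algorithm).

    Objective: 0.5 * ||y - X @ beta||^2 + sum_j p(|beta_j|), where MCP
    p(t) = lam*t - h(t) with h convex (h(t) = t^2 / (2*gamma) for
    t <= gamma*lam, linear beyond). Each outer step replaces h by its
    linearization at the current iterate, leaving a weighted lasso that
    is solved by ISTA (proximal gradient) in the inner loop.
    """
    n, p = X.shape
    beta = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth gradient
    for _ in range(n_outer):
        # Subgradient of the convex part h at |beta_j| gives the
        # effective l1 weight lam - h'(|beta_j|) for the subproblem.
        w = lam - np.minimum(np.abs(beta) / gamma, lam)
        for _ in range(n_inner):
            grad = X.T @ (X @ beta - y)
            beta = soft_threshold(beta - grad / L, w / L)
    return beta
```

Because the weight w_j shrinks toward zero as |beta_j| grows, large coefficients are eventually left almost unpenalized, which is the mechanism behind the near-unbiasedness of folded-concave penalties discussed in the cited works.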
Cites Work
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Nearly unbiased variable selection under minimax concave penalty
- A unified approach to model selection and sparse recovery using regularized least squares
- The Adaptive Lasso and Its Oracle Properties
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Statistics for high-dimensional data. Methods, theory and applications.
- On functions representable as a difference of convex functions
- One-step sparse estimates in nonconcave penalized likelihood models
- Convex analysis approach to d. c. programming: Theory, algorithms and applications
- On the pervasiveness of difference-convexity in optimization and statistics
- Minimization of transformed \(L_1\) penalty: theory, difference of convex function algorithm, and robust application in compressed sensing
- The DC (Difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems
- Nonconcave penalized likelihood with a diverging number of parameters.
- Least angle regression. (With discussion)
- DC programming: overview.
- Simultaneous analysis of Lasso and Dantzig selector
- Multi-stage convex relaxation for feature selection
- Calibrating nonconvex penalized regression in ultra-high dimension
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Strong oracle optimality of folded concave penalized estimation
- A Proof of Convergence of the Concave-Convex Procedure Using Zangwill's Theory
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Computing B-Stationary Points of Nonsmooth DC Programs
- Hypothesis Testing in High-Dimensional Regression Under the Gaussian Random Design Model: Asymptotic Theory
- SparseNet: Coordinate Descent With Nonconvex Penalties
- Global minimization of a difference of two convex functions
- The Concave-Convex Procedure
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models