Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
From MaRDI portal
DOI: 10.1007/s10107-017-1114-y
zbMath: 1386.90116
OpenAlex: W2587436146
Wikidata: Q47263899
Scholia: Q47263899
MaRDI QID: Q1683689
Hongcheng Liu, Yinyu Ye, Tao Yao, Run-Ze Li
Publication date: 1 December 2017
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: http://europepmc.org/articles/pmc5720392
MSC Classification
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Linear regression; mixed models (62J05)
- Analysis of algorithms and problem complexity (68Q25)
- Applications of mathematical programming (90C90)
- Nonconvex programming, global optimization (90C26)
Related Items
- A second-order optimality condition with first- and second-order complementarity associated with global convergence of algorithms
- High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- Some theoretical limitations of second-order algorithms for smooth constrained optimization
- Solving constrained nonsmooth group sparse optimization via group Capped-\(\ell_1\) relaxation and group smoothing proximal gradient algorithm
- Regularized sample average approximation for high-dimensional stochastic optimization under low-rankness
- Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression
- Regularized Linear Programming Discriminant Rule with Folded Concave Penalty for Ultrahigh-Dimensional Data
- Linear-step solvability of some folded concave and singly-parametric sparse optimization problems
- Unnamed Item
- A cubic spline penalty for sparse approximation under tight frame balanced model
- Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points
- Computation of second-order directional stationary points for group sparse optimization
- Hessian Barrier Algorithms for Linearly Constrained Optimization Problems
- Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
Cites Work
- Unnamed Item
- Unnamed Item
- Nearly unbiased variable selection under minimax concave penalty
- Global solutions to folded concave penalized nonconvex learning
- Random design analysis of ridge regression
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Least quantile regression via modern optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- How close is the sample covariance matrix to the actual covariance matrix?
- A tail inequality for quadratic forms of subgaussian random vectors
- One-step sparse estimates in nonconcave penalized likelihood models
- On affine scaling algorithms for nonconvex quadratic programming
- On the complexity of approximating a KKT point of quadratic programming
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- On the conditions used to prove oracle results for the Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- Complexity of unconstrained \(L_2 - L_p\) minimization
- Calibrating nonconvex penalized regression in ultra-high dimension
- Cubic regularization of Newton method and its global performance
- Strong oracle optimality of folded concave penalized estimation
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- Variable selection using MM algorithms
- Quadratic programming is in NP
- Reconstruction From Anisotropic Random Measurements
- Lower Bound Theory of Nonzero Entries in Solutions of $\ell_2$-$\ell_p$ Minimization
- Decoding by Linear Programming
- Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles
- Complexity of penalized likelihood estimation
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Nonconcave Penalized Likelihood With NP-Dimensionality
- Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- A general theory of concave regularization for high-dimensional sparse estimation problems