A truncated Newton algorithm for nonconvex sparse recovery
From MaRDI portal
DOI: 10.1016/j.apnum.2022.04.006 · zbMath: 1493.90180 · OpenAlex: W4223937017 · MaRDI QID: Q2143093
Wanyou Cheng, Hongsheng Chen, Jin Yun Yuan
Publication date: 30 May 2022
Published in: Applied Numerical Mathematics
Full work available at URL: https://doi.org/10.1016/j.apnum.2022.04.006
Uses Software
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Enhancing sparsity by reweighted \(\ell_1\) minimization
- Feature selection in machine learning: an exact penalty approach using a difference of convex function algorithm
- A coordinate gradient descent method for nonsmooth separable minimization
- A truncated Newton method with non-monotone line search for unconstrained optimization
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- Minimization of transformed \(L_1\) penalty: theory, difference of convex function algorithm, and robust application in compressed sensing
- A unified primal dual active set algorithm for nonconvex sparse recovery
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- Optimality Conditions and a Smoothing Trust Region Newton Method for Non-Lipschitz Optimization
- A Fast Algorithm for Sparse Reconstruction Based on Shrinkage, Subspace Optimization, and Continuation
- Lower Bound Theory of Nonzero Entries in Solutions of \(\ell_2\)-\(\ell_p\) Minimization
- Gradient-Based Methods for Sparse Recovery
- From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sparse Reconstruction by Separable Approximation
- Gradient-based method with active set strategy for \(\ell_1\) optimization
- A Nonmonotone Line Search Technique for Newton’s Method
- Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity
- A Unified View of Exact Continuous Penalties for \(\ell_2\)-\(\ell_0\) Minimization
- Fast Image Recovery Using Variable Splitting and Constrained Optimization
- Benchmarking optimization software with performance profiles