A second-order method for convex \(\ell_1\)-regularized optimization with active-set prediction
Publication: 2815550
DOI: 10.1080/10556788.2016.1138222
zbMath: 1341.49039
arXiv: 1505.04315
OpenAlex: W279731301
MaRDI QID: Q2815550
Jorge Nocedal, Nitish Shirish Keskar, Andreas Wächter, Figen Oztoprak
Publication date: 29 June 2016
Published in: Optimization Methods and Software
Full work available at URL: https://arxiv.org/abs/1505.04315
Keywords: second-order method, active-set prediction, \(\ell_1\)-minimization, active-set correction, subspace optimization
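As context for the title and keywords, the problem class addressed is the \(\ell_1\)-regularized convex program. The display below is a minimal sketch using customary symbols (\(f\), \(\lambda\), \(x\) are generic, not taken from this entry); active-set prediction here refers to estimating which components of a minimizer are zero:
\[
\min_{x \in \mathbb{R}^n} \; \phi(x) = f(x) + \lambda \|x\|_1, \qquad f \text{ smooth and convex}, \; \lambda > 0.
\]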
Related Items
- A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer
- An active set Newton-CG method for \(\ell_1\) optimization
- A Reduced-Space Algorithm for Minimizing \(\ell_1\)-Regularized Convex Functions
- An inexact quasi-Newton algorithm for large-scale \(\ell_1\) optimization with box constraints
- Gradient-based method with active set strategy for \(\ell_1\) optimization
- A Highly Efficient Semismooth Newton Augmented Lagrangian Method for Solving Lasso Problems
- A subspace-accelerated split Bregman method for sparse data recovery with joint \(\ell_1\)-type regularizers
- An active-set proximal-Newton algorithm for \(\ell_1\)-regularized optimization problems with box constraints
- FaRSA for \(\ell_1\)-regularized convex optimization: local convergence and numerical experience
- A limited-memory quasi-Newton algorithm for bound-constrained non-smooth optimization
- An Efficient Proximal Block Coordinate Homotopy Method for Large-Scale Sparse Least Squares Problems
- A fast conjugate gradient algorithm with active set prediction for \(\ell_1\) optimization
- A decomposition method for Lasso problems with zero-sum constraint
- A Dimension Reduction Technique for Large-Scale Structured Sparse Optimization Problems with Application to Convex Clustering
- An active-set proximal quasi-Newton algorithm for \(\ell_1\)-regularized minimization over a sphere constraint
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Sample size selection in optimization methods for machine learning
- Matrix-free interior point method for compressed sensing problems
- A coordinate gradient descent method for nonsmooth separable minimization
- Representations of quasi-Newton matrices and their use in limited memory methods
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Optimization with Sparsity-Inducing Penalties
- On the convergence of an active-set method for \(\ell_1\) minimization
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Proximal Newton-Type Methods for Minimizing Composite Functions
- A Feasible Active Set Method for Strictly Convex Quadratic Problems with Simple Bounds
- A Fast Algorithm for Sparse Reconstruction Based on Shrinkage, Subspace Optimization, and Continuation
- Numerical Optimization
- Sparse Reconstruction by Separable Approximation
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- De-noising by soft-thresholding