On the linear convergence of a proximal gradient method for a class of nonsmooth convex minimization problems
From MaRDI portal
Publication: 457540
DOI: 10.1007/s40305-013-0015-x
zbMath: 1334.90127
OpenAlex: W2089767665
MaRDI QID: Q457540
Zhi-Quan Luo, Jiao-Jiao Jiang, Hai-Bin Zhang
Publication date: 29 September 2014
Published in: Journal of the Operations Research Society of China
Full work available at URL: https://doi.org/10.1007/s40305-013-0015-x
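The method named in the title, the proximal gradient method, alternates a gradient step on the smooth part of the objective with a proximal step on the nonsmooth part. As a minimal illustrative sketch (not the paper's specific problem class or its linear-convergence analysis), here is the method applied to an assumed lasso instance, min 0.5‖Ax − b‖² + λ‖x‖₁, where the proximal operator of the ℓ₁ norm is soft-thresholding:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step=None, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    if step is None:
        # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
        # of the gradient of the smooth part.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                         # gradient step
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
    return x
```

The function names and the lasso setting above are assumptions for illustration; the paper covers a broader class of nonsmooth convex problems and establishes linear convergence under an error-bound condition.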
Related Items
- On globally Q-linear convergence of a splitting method for group Lasso
- A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure
- A unified approach to error bounds for structured convex optimization problems
- A dual semismooth Newton based augmented Lagrangian method for large-scale linearly constrained sparse group square-root Lasso problems
- A modified proximal gradient method for a family of nonsmooth convex optimization problems
- On the linear convergence of the approximate proximal splitting method for non-smooth convex optimization
- A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property
- Decomposable norm minimization with proximal-gradient homotopy algorithm
- On the linear convergence of the alternating direction method of multipliers
- Iteration complexity analysis of block coordinate descent methods
- Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems
- Proximal gradient method with automatic selection of the parameter by automatic differentiation
- An efficient Hessian based algorithm for solving large-scale sparse group Lasso problems
- The forward-backward splitting method and its convergence rate for the minimization of the sum of two functions in Banach spaces
- A parallel line search subspace correction method for composite convex optimization
- A Block Successive Upper-Bound Minimization Method of Multipliers for Linearly Constrained Convex Optimization
- Optimal portfolio selections via \(\ell_{1, 2}\)-norm regularization
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
- On the proximal Landweber Newton method for a class of nonsmooth convex problems
Uses Software
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- A coordinate gradient descent method for nonsmooth separable minimization
- Introductory lectures on convex optimization. A basic course.
- Sparse group Lasso and high dimensional multinomial classification
- The Group Lasso for Logistic Regression
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sparse Reconstruction by Separable Approximation
- Regularization and Variable Selection Via the Elastic Net
- Model Selection and Estimation in Regression with Grouped Variables
- Signal Recovery by Proximal Forward-Backward Splitting
- Convex Analysis