Nonconvex proximal incremental aggregated gradient method with linear convergence
From MaRDI portal
Publication:2275279
DOI: 10.1007/s10957-019-01538-3 · zbMath: 1429.90057 · arXiv: 1804.02571 · OpenAlex: W3102533266 · Wikidata: Q127855529 · Scholia: Q127855529 · MaRDI QID: Q2275279
Hui Zhang, Wei Peng, Xiaoya Zhang
Publication date: 2 October 2019
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://arxiv.org/abs/1804.02571
Mathematics Subject Classification:
- Large-scale problems in mathematical programming (90C06)
- Nonconvex programming, global optimization (90C26)
- Stochastic programming (90C15)
Related Items (7)
- Proximal variable smoothing method for three-composite nonconvex nonsmooth minimization with a linear operator
- An incremental aggregated proximal ADMM for linearly constrained nonconvex optimization with application to sparse logistic regression problems
- Proximal-like incremental aggregated gradient method with Bregman distance in weakly convex optimization problems
- Variable smoothing incremental aggregated gradient method for nonsmooth nonconvex regularized optimization
- Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees
- Linear convergence of proximal incremental aggregated gradient method for nonconvex nonsmooth minimization problems
Cites Work
- New properties of forward-backward splitting and a practical proximal-descent algorithm
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- A coordinate gradient descent method for linearly constrained smooth optimization and support vector machines training
- A coordinate gradient descent method for nonsmooth separable minimization
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems
- An Asynchronous Mini-Batch Algorithm for Regularized Stochastic Optimization
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
- Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
- A Linearly Convergent Dual-Based Gradient Projection Algorithm for Quadratically Constrained Convex Minimization
- Signal Recovery by Proximal Forward-Backward Splitting