Inertial proximal incremental aggregated gradient method with linear convergence guarantees
DOI: 10.1007/s00186-022-00790-0
zbMath: 1503.90137
OpenAlex: W4283461167
MaRDI QID: Q2084299
Authors: Wei Peng, Hui Zhang, Xiaoya Zhang
Publication date: 18 October 2022
Published in: Mathematical Methods of Operations Research
Full work available at URL: https://doi.org/10.1007/s00186-022-00790-0
Keywords: Lyapunov function, linear convergence, inertial method, quadratic growth condition, incremental aggregated gradient
MSC classifications: Nonconvex programming, global optimization (90C26); Nonlinear programming (90C30); Applications of operator theory in optimization, convex analysis, mathematical programming, economics (47N10)
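The title and keywords describe an inertial (momentum) variant of the proximal incremental aggregated gradient (PIAG) method. As a minimal sketch, assuming the standard PIAG template for minimizing a finite sum F(x) = \sum_{i=1}^n f_i(x) + g(x) with smooth f_i and proximable g, one iteration takes the generic form

    y^k = x^k + \beta (x^k - x^{k-1}),
    x^{k+1} = \operatorname{prox}_{\gamma g}\Big( y^k - \gamma \sum_{i=1}^n \nabla f_i\big(x^{\tau_i^k}\big) \Big),

where \beta \ge 0 is the inertial parameter, \gamma > 0 the step size, and \tau_i^k \in \{k - \tau, \dots, k\} the (possibly delayed) iterate at which \nabla f_i was last evaluated. This display is only the generic form from the PIAG literature; the paper's exact scheme, admissible parameter ranges, and the Lyapunov-function argument yielding linear convergence under a quadratic growth condition are given in the publication itself.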
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Gradient methods for minimizing composite functions
- Fast first-order methods for composite convex optimization with backtracking
- iPiasco: inertial proximal algorithm for strongly convex optimization
- Introductory lectures on convex optimization. A basic course.
- Local convergence of the heavy-ball method and iPiano for non-convex optimization
- From error bounds to the complexity of first-order descent methods for convex functions
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- Local and global convergence of a general inertial proximal splitting scheme for minimizing composite functions
- Proximal-like incremental aggregated gradient method with Bregman distance in weakly convex optimization problems
- Variable smoothing incremental aggregated gradient method for nonsmooth nonconvex regularized optimization
- Convergence rates for an inertial algorithm of gradient type associated to a smooth non-convex minimization
- Nonconvex proximal incremental aggregated gradient method with linear convergence
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Linearly convergent away-step conditional gradient for non-strongly convex functions
- Linear convergence of first order methods for non-strongly convex optimization
- On the maximal monotonicity of subdifferential mappings
- Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems
- An Asynchronous Mini-Batch Algorithm for Regularized Stochastic Optimization
- Inertial Proximal Alternating Linearized Minimization (iPALM) for Nonconvex and Nonsmooth Problems
- The Group Lasso for Logistic Regression
- Quasi-Nonexpansive Iterations on the Affine Hull of Orbits: From Mann's Mean Value Algorithm to Inertial Methods
- First-Order Methods in Optimization
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
- Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Tilt Stability, Uniform Quadratic Growth, and Strong Metric Regularity of the Subdifferential
- A Convergent Incremental Gradient Method with a Constant Step Size
- Some methods of speeding up the convergence of iteration methods
- Sur le problème de la division (On the problem of division)
- Bregman Finito/MISO for Nonconvex Regularized Finite Sum Minimization without Lipschitz Gradient Continuity
- An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping