Accelerated gradient sliding for structured convex optimization
Publication: 2141354
DOI: 10.1007/s10589-022-00365-z
zbMath: 1489.90121
arXiv: 1609.04905
OpenAlex: W2523244466
MaRDI QID: Q2141354
Publication date: 25 May 2022
Published in: Computational Optimization and Applications
Full work available at URL: https://arxiv.org/abs/1609.04905
- Convex programming (90C25)
- Large-scale problems in mathematical programming (90C06)
- Numerical methods based on nonlinear programming (49M37)
Related Items (3)
- Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method
- A multi-step doubly stabilized bundle method for nonsmooth convex optimization
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
Cites Work
- Smooth minimization of non-smooth functions
- Gradient sliding for composite optimization
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming
- An inertial forward-backward algorithm for monotone inclusions
- On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- Introductory lectures on convex optimization. A basic course.
- An algorithm for total variation minimization and applications
- Accelerated schemes for a class of variational inequalities
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- Mirror Prox algorithm for multi-term composite minimization and semi-separable problems
- Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization
- On the $O(1/n)$ Convergence Rate of the Douglas–Rachford Alternating Direction Method
- Convergence Analysis of Primal-Dual Algorithms for a Saddle-Point Problem: From Contraction Perspective
- A General Framework for a Class of First Order Primal-Dual Algorithms for Convex Optimization in Imaging Science
- NESTA: A Fast and Accurate First-Order Method for Sparse Recovery
- Smoothing Techniques for Computing Nash Equilibria of Sequential Games
- Smooth Optimization with Approximate Gradient
- An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Solving variational inequalities with Stochastic Mirror-Prox algorithm
- Optimal Primal-Dual Methods for a Class of Saddle Point Problems
- An Accelerated Linearized Alternating Direction Method of Multipliers
- Iteration-Complexity of Block-Decomposition Algorithms and the Alternating Direction Method of Multipliers
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- An introduction to continuous optimization for imaging