Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems
Publication: 5139836
DOI: 10.1137/19M127375X
zbMath: 1454.90057
arXiv: 1906.01115
MaRDI QID: Q5139836
Sarath Pattathil, Aryan Mokhtari, Asuman Ozdaglar
Publication date: 11 December 2020
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1906.01115
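The extragradient method named in the title can be illustrated on a toy bilinear saddle point. The sketch below is not from the paper; it is a minimal, assumed example on \(f(x, y) = xy\) (saddle point at the origin), where each iteration takes a gradient step to a midpoint and then updates using the gradients evaluated there:

```python
def extragradient(x, y, eta=0.1, iters=2000):
    """Extragradient iteration for min_x max_y f(x, y) = x * y.

    Illustrative sketch only: the paper analyzes general smooth
    convex-concave saddle point problems, not just this bilinear case.
    """
    for _ in range(iters):
        # Extrapolation (midpoint) step: grad_x f = y, grad_y f = x.
        xm = x - eta * y
        ym = y + eta * x
        # Update using gradients evaluated at the midpoint.
        x = x - eta * ym
        y = y + eta * xm
    return x, y

x, y = extragradient(1.0, 1.0)
print(x, y)  # both iterates shrink toward the saddle point (0, 0)
```

On this bilinear example, plain simultaneous gradient descent-ascent would spiral away from the origin; the midpoint evaluation is what makes the iterates contract.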
Related Items
- New Primal-Dual Algorithms for a Class of Nonsmooth and Nonlinear Convex-Concave Minimax Problems
- New First-Order Algorithms for Stochastic Variational Inequalities
- Cubic regularized Newton method for the saddle point models: a global and local convergence analysis
- On lower iteration complexity bounds for the convex concave saddle point problems
- An \(O(s^r)\)-resolution ODE framework for understanding discrete-time algorithms and applications to the linear convergence of minimax problems
- Extragradient and extrapolation methods with generalized Bregman distances for saddle point problems
- Continuous-Time Convergence Rates in Potential and Monotone Games
- Two Steps at a Time---Taking GAN Training in Stride with Tseng's Method
- Unified linear convergence of first-order primal-dual algorithms for saddle point problems
- A unified primal-dual algorithm framework for inequality constrained problems
- Sion's Minimax Theorem in Geodesic Metric Spaces and a Riemannian Extragradient Algorithm
- Minimax Problems with Coupled Linear Constraints: Computational Complexity and Duality
- Transformed primal-dual methods for nonlinear saddle point systems
- The landscape of the proximal point method for nonconvex-nonconcave minimax optimization
- Riemannian Hamiltonian Methods for Min-Max Optimization on Manifolds
- Optimal analysis of method with batching for monotone stochastic finite-sum variational inequalities
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
- Primal-Dual First-Order Methods for Affinely Constrained Multi-block Saddle Point Problems
- Faster first-order primal-dual methods for linear programming using restarts and sharpness
- Stochastic Saddle Point Problems with Decision-Dependent Distributions
- Forward-reflected-backward method with variance reduction
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- Gradient methods for solving Stackelberg games
- On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Subgradient methods for saddle-point problems
- On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space
- A modification of the Arrow-Hurwicz method for search of saddle points
- Enlargement of monotone operators with applications to variational inequalities
- Introductory lectures on convex optimization. A basic course.
- On linear convergence of iterative methods for the variational inequality problem
- A first-order primal-dual algorithm for convex problems with applications to imaging
- On the Complexity of the Hybrid Proximal Extragradient Method for the Iterates and the Ergodic Mean
- On the Convergence of the Proximal Point Algorithm for Convex Minimization
- New Proximal Point Algorithms for Convex Minimization
- Monotone Operators and the Proximal Point Algorithm
- Convergence Rates in Forward--Backward Splitting
- Convergence of Proximal-Like Algorithms
- A First-Order Primal-Dual Algorithm with Linesearch
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- A Forward-Backward Splitting Method for Monotone Inclusions Without Cocoercivity
- Optimal Primal-Dual Methods for a Class of Saddle Point Problems
- Convex analysis and monotone operator theory in Hilbert spaces