Two Steps at a Time---Taking GAN Training in Stride with Tseng's Method
Publication: 5089720
DOI: 10.1137/21M1420939
zbMath: 1492.65175
arXiv: 2006.09033
OpenAlex: W3035573799
MaRDI QID: Q5089720
Axel Böhm, Ernö Robert Csetnek, Michael Sedlmayer, Radu Ioan Boţ
Publication date: 15 July 2022
Published in: SIAM Journal on Mathematics of Data Science
Full work available at URL: https://arxiv.org/abs/2006.09033
Mathematics Subject Classification:
- Minimax problems in mathematical programming (90C47)
- Stochastic programming (90C15)
- Numerical methods for variational inequalities and related problems (65K15)
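The "two steps at a time" of the title refers to Tseng's forward-backward-forward scheme, which the paper adapts to (stochastic) minimax problems such as GAN training. The following is a minimal deterministic sketch, not the paper's stochastic variant: the bilinear test problem, step size, and the omission of any proximal/resolvent term are illustrative assumptions.

```python
import numpy as np

def tseng_fbf(F, z0, step, iters):
    """Tseng's forward-backward-forward iteration for a monotone operator F
    (proximal term omitted for simplicity, leaving two forward evaluations):
        z_bar = z - step * F(z)                    # forward step
        z_new = z_bar - step * (F(z_bar) - F(z))   # correction (second forward step)
    """
    z = z0.astype(float).copy()
    for _ in range(iters):
        Fz = F(z)
        z_bar = z - step * Fz
        z = z_bar - step * (F(z_bar) - Fz)
    return z

# Illustrative bilinear saddle-point problem min_x max_y x^T A y,
# whose monotone (skew-symmetric) operator is F(x, y) = (A y, -A^T x).
A = np.array([[1.0, 0.5], [0.0, 1.0]])

def F(z):
    x, y = z[:2], z[2:]
    return np.concatenate([A @ y, -A.T @ x])

L = np.linalg.norm(A, 2)  # Lipschitz constant of F (spectral norm of A)
z = tseng_fbf(F, np.array([1.0, -1.0, 0.5, 2.0]), step=0.9 / L, iters=2000)
```

With a step size below 1/L, the iterates of this bilinear game converge to its unique saddle point (0, 0); plain gradient descent-ascent on the same problem would diverge, which is the motivation for the two-step correction.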
Related Items
- Variable sample-size operator extrapolation algorithm for stochastic mixed variational inequalities
- An accelerated minimax algorithm for convex-concave saddle point problems with nonsmooth coupling function
- Alternating Proximal-Gradient Steps for (Stochastic) Nonconvex-Concave Minimax Problems
- A modified Tseng's algorithm with extrapolation from the past for pseudo-monotone variational inequalities
- Tseng's Algorithm with Extrapolation from the past Endowed with Variable Metrics and Error Terms
Cites Work
- Nonlinear total variation based noise removal algorithms
- Dual extrapolation and its applications to solving variational inequalities and related problems
- Shadow Douglas-Rachford splitting for monotone inclusions
- Applications of a Splitting Algorithm to Decomposition in Convex Programming and Variational Inequalities
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Convex Optimization in Signal Processing and Communications
- A Primal-Dual Algorithm with Line Search for General Convex-Concave Saddle Point Problems
- Minibatch Forward-Backward-Forward Methods for Solving Stochastic Variational Inequalities
- A Forward-Backward Splitting Method for Monotone Inclusions Without Cocoercivity
- Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems
- Solving variational inequalities with Stochastic Mirror-Prox algorithm
- Projected Reflected Gradient Methods for Monotone Variational Inequalities
- Breaking the Curse of Dimensionality with Convex Neural Networks
- ℓ1 Regularization in Infinite Dimensional Feature Spaces
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- A Stochastic Approximation Method
- Convex analysis and monotone operator theory in Hilbert spaces
- The complexity of constrained min-max optimization