Stochastic Extragradient: General Analysis and Improved Rates


arXiv: 2111.08611
MaRDI QID: Q6383212

Nicolas Loizou, Gauthier Gidel, Hugo Berard, Eduard Gorbunov

Publication date: 16 November 2021

Abstract: The Stochastic Extragradient (SEG) method is one of the most popular algorithms for solving min-max optimization and variational inequality problems (VIPs) appearing in various machine learning tasks. However, several important questions regarding the convergence properties of SEG remain open, including the sampling of stochastic gradients, mini-batching, and convergence guarantees for monotone finite-sum variational inequalities with possibly non-monotone terms, among others. To address these questions, in this paper we develop a novel theoretical framework that allows us to analyze several variants of SEG in a unified manner. Besides standard setups, such as Same-Sample SEG under Lipschitzness and monotonicity or Independent-Samples SEG under uniformly bounded variance, our approach allows us to analyze variants of SEG that were never explicitly considered in the literature before. Notably, we analyze SEG with arbitrary sampling, which includes importance sampling and various mini-batching strategies as special cases. Our rates for the new variants of SEG outperform the current state-of-the-art convergence guarantees and rely on less restrictive assumptions.
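
For concreteness, below is a minimal Python sketch of the Same-Sample SEG update on a toy strongly monotone finite-sum min-max problem. It is not taken from the paper or the companion repository; the toy problem, the regularization mu, the step size gamma, and all variable names are illustrative assumptions, and the sketch only shows the structure of the extrapolation and update steps when both reuse the same stochastic sample.

    # Same-Sample SEG sketch on f_i(x, y) = (mu/2)||x||^2 + x^T A_i y - (mu/2)||y||^2
    # (illustrative toy problem; not the paper's code). The averaged game has the
    # unique solution x* = y* = 0, so the final norm indicates progress.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 5, 20
    mu = 0.1                                  # assumed regularization (strong monotonicity)
    A = rng.standard_normal((n, d, d))        # one bilinear coupling term A_i per sample

    def operator(i, x, y):
        # Stochastic operator F_i(z) = (mu*x + A_i y, mu*y - A_i^T x) for z = (x, y)
        return mu * x + A[i] @ y, mu * y - A[i].T @ x

    x, y = rng.standard_normal(d), rng.standard_normal(d)
    gamma = 0.05                              # assumed constant step size

    for step in range(2000):
        i = rng.integers(n)                   # draw one sample ...
        gx, gy = operator(i, x, y)
        x_half, y_half = x - gamma * gx, y - gamma * gy   # extrapolation step
        gx, gy = operator(i, x_half, y_half)  # ... and reuse it at the extrapolated point
        x, y = x - gamma * gx, y - gamma * gy             # update step

    print(np.linalg.norm(np.concatenate([x, y])))  # distance to z* = 0

The paper's Independent-Samples variant would instead draw a fresh index for the second call to the operator, and its arbitrary-sampling analysis covers mini-batch and importance-sampling versions of this same two-step scheme.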

Has companion code repository: https://github.com/hugobb/stochastic-extragradient
