Simple and Optimal Methods for Stochastic Variational Inequalities, I: Operator Extrapolation
DOI: 10.1137/20M1381678 · zbMath: 1497.90204 · arXiv: 2011.02987 · Wikidata: Q114074130 · MaRDI QID: Q5097022
Guanghui Lan, Georgios Kotsalis, Tianjiao Li
Publication date: 19 August 2022
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/2011.02987
MSC classification: Analysis of algorithms and problem complexity (68Q25); Convex programming (90C25); Stochastic programming (90C15); Complementarity and equilibrium problems and variational inequalities (finite dimensions, aspects of mathematical programming) (90C33); Stochastic approximation (62L20)
Related Items (6)
Cites Work
- Solving strongly monotone variational and quasi-variational inequalities
- Dual extrapolation and its applications to solving variational inequalities and related problems
- Accelerated schemes for a class of variational inequalities
- On smoothing, regularization, and averaging in stochastic approximation methods for stochastic variational inequality problems
- Non-Euclidean restricted memory level method for large-scale convex optimization
- On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems
- First-order and stochastic optimization methods for machine learning
- On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators
- Méthodes itératives pour les équations et inéquations aux dérivées partielles non linéaires de type monotone (Iterative methods for nonlinear partial differential equations and inequalities of monotone type)
- Interior projection-like methods for monotone variational inequalities
- Approximate policy iteration: a survey and some new methods
- On the Complexity of the Hybrid Proximal Extragradient Method for the Iterates and the Ergodic Mean
- Robust Stochastic Approximation Approach to Stochastic Programming
- Monotone Operators and the Proximal Point Algorithm
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Random Gradient Extrapolation for Distributed and Stochastic Optimization
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- On the Convergence of Stochastic Primal-Dual Hybrid Gradient
- Solving variational inequalities with Stochastic Mirror-Prox algorithm
- Statistical Inference via Convex Optimization
- Projected Reflected Gradient Methods for Monotone Variational Inequalities
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities