Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
From MaRDI portal
Publication: 4652003
DOI: 10.1137/S1052623403425629
zbMath: 1106.90059
Wikidata: Q57392926 (Scholia: Q57392926)
MaRDI QID: Q4652003
Publication date: 23 February 2005
Published in: SIAM Journal on Optimization
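The paper this record describes introduces the Mirror-Prox algorithm, whose ergodic iterates attain an O(1/t) duality-gap rate for variational inequalities with Lipschitz continuous monotone operators and for smooth convex-concave saddle-point problems. As an illustrative sketch (not part of the portal record), the entropy setup of Mirror-Prox on a bilinear matrix game reduces to two multiplicative-weights prox steps per iteration; the payoff matrix `A`, the step size `tau`, and the function name below are assumptions chosen for the example, not taken from the paper.

```python
import numpy as np

def mirror_prox_matrix_game(A, tau, iters):
    """Entropy-setup Mirror-Prox sketch for the matrix game
    min_{x in simplex} max_{y in simplex} x^T A y.
    Returns the ergodic averages (x_bar, y_bar), the iterates for which
    the O(1/t) duality-gap rate is stated."""
    m, n = A.shape
    x = np.full(m, 1.0 / m)   # uniform starting points on the simplices
    y = np.full(n, 1.0 / n)
    x_sum = np.zeros(m)
    y_sum = np.zeros(n)
    for _ in range(iters):
        # 1) extrapolation step: prox step from (x, y) along F(x, y)
        u = x * np.exp(-tau * (A @ y));  u /= u.sum()
        v = y * np.exp(tau * (A.T @ x)); v /= v.sum()
        # 2) update step: prox step from (x, y) along F(u, v)
        x = x * np.exp(-tau * (A @ v));  x /= x.sum()
        y = y * np.exp(tau * (A.T @ u)); y /= y.sum()
        x_sum += x
        y_sum += y
    return x_sum / iters, y_sum / iters

rng = np.random.default_rng(0)
A = rng.random((5, 5))        # payoff matrix with entries in [0, 1]
x_bar, y_bar = mirror_prox_matrix_game(A, tau=0.1, iters=3000)
# Duality gap of the averaged iterates; it decays at the O(1/t) rate.
gap = np.max(A.T @ x_bar) - np.min(A @ y_bar)
```

With `max |A_ij| <= 1` the conservative step size `tau = 0.1` satisfies the step-size condition of the entropy setup, and after 3000 iterations the ergodic duality gap is on the order of 10^-2, consistent with the (log m + log n)/(tau t) bound.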
Related Items
A new efficient algorithm for finding common fixed points of multivalued demicontractive mappings and solutions of split generalized equilibrium problems in Hilbert spaces
Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method
Accelerated Bregman Primal-Dual Methods Applied to Optimal Transport and Wasserstein Barycenter Problems
An adaptive analog of Nesterov's method for variational inequalities with a strongly monotone operator
New Primal-Dual Algorithms for a Class of Nonsmooth and Nonlinear Convex-Concave Minimax Problems
A Hybrid Proximal Extragradient Self-Concordant Primal Barrier Method for Monotone Variational Inequalities
New First-Order Algorithms for Stochastic Variational Inequalities
Accelerating Block-Decomposition First-Order Methods for Solving Composite Saddle-Point and Two-Player Nash Equilibrium Problems
A Novel Algorithm with Self-adaptive Technique for Solving Variational Inequalities in Banach Spaces
Using Nemirovski's Mirror-Prox method as basic procedure in Chubanov's method for solving homogeneous feasibility problems
Revisiting linearized Bregman iterations under Lipschitz-like convexity condition
Structured Sparsity: Discrete and Convex Approaches
Rescaled Coordinate Descent Methods for Linear Programming
Iterative Methods for the Elastography Inverse Problem of Locating Tumors
An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems
A Proximal Strictly Contractive Peaceman--Rachford Splitting Method for Convex Programming with Applications to Imaging
A Level-Set Method for Convex Optimization with a Feasible Solution Path
Accelerated Extra-Gradient Descent: A Novel Accelerated First-Order Method
Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems
Two Steps at a Time---Taking GAN Training in Stride with Tseng's Method
Adaptive two-stage Bregman method for variational inequalities
Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
On the Number of Iterations for Dantzig--Wolfe Optimization and Packing-Covering Approximation Algorithms
Nearly linear-time packing and covering LP solvers
Nearly linear-time packing and covering LP solvers, achieving width-independence and \(O(1/\varepsilon)\)-convergence
First-Order Methods for Problems with $O(1)$ Functional Constraints Can Have Almost the Same Convergence Rate as for Unconstrained Problems
Simple and Optimal Methods for Stochastic Variational Inequalities, I: Operator Extrapolation
Unifying mirror descent and dual averaging
A unified analysis of variational inequality methods: variance reduction, sampling, quantization, and coordinate descent
Sion's Minimax Theorem in Geodesic Metric Spaces and a Riemannian Extragradient Algorithm
Cyclic Coordinate Dual Averaging with Extrapolation
Customized alternating direction methods of multipliers for generalized multi-facility Weber problem
Adaptive extraproximal algorithm for the equilibrium problem in Hadamard spaces
A stochastic variance-reduced accelerated primal-dual method for finite-sum saddle-point problems
The landscape of the proximal point method for nonconvex-nonconcave minimax optimization
A unified single-loop alternating gradient projection algorithm for nonconvex-concave and convex-nonconcave minimax problems
Accelerated variance-reduced methods for saddle-point problems
Riemannian Hamiltonian Methods for Min-Max Optimization on Manifolds
A \(J\)-symmetric quasi-Newton method for minimax problems
Optimal algorithms for differentially private stochastic monotone variational inequalities and saddle-point problems
No-regret algorithms in on-line learning, games and convex optimization
Optimal analysis of method with batching for monotone stochastic finite-sum variational inequalities
No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
Robust Accelerated Primal-Dual Methods for Computing Saddle Points
Variable sample-size operator extrapolation algorithm for stochastic mixed variational inequalities
Optimality Conditions for Nonsmooth Nonconvex-Nonconcave Min-Max Problems and Generative Adversarial Networks
An accelerated minimax algorithm for convex-concave saddle point problems with nonsmooth coupling function
A randomized progressive hedging algorithm for stochastic variational inequality
Stochastic projective splitting
A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
Stochastic first-order methods for convex and nonconvex functional constrained optimization
Variational Gram Functions: Convex Analysis and Optimization
An Inverse-Adjusted Best Response Algorithm for Nash Equilibria
Randomized first order algorithms with applications to \(\ell _{1}\)-minimization
Rescaling Algorithms for Linear Conic Feasibility
An implicit gradient-descent procedure for minimax problems
On the \(O(1/t)\) convergence rate of the LQP prediction-correction method
Online First-Order Framework for Robust Convex Optimization
Complexity of first-order inexact Lagrangian and penalty methods for conic convex programming
Analysis and Numerical Solution of a Modular Convex Nash Equilibrium Problem
Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems
The Approximate Duality Gap Technique: A Unified Theory of First-Order Methods
An alternating direction method of multipliers with a worst-case $O(1/n^2)$ convergence rate
Partial Lagrangian relaxation for the unbalanced orthogonal Procrustes problem
Proximal extrapolated gradient methods for variational inequalities
A Method with Convergence Rates for Optimization Problems with Variational Inequality Constraints
Conditional Gradient Methods for Convex Optimization with General Affine and Nonlinear Constraints
Efficient Search of First-Order Nash Equilibria in Nonconvex-Concave Smooth Min-Max Problems
An Accelerated Inexact Proximal Point Method for Solving Nonconvex-Concave Min-Max Problems
Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
An introduction to continuous optimization for imaging
An acceleration procedure for optimal first-order methods
Iteration-complexity of first-order augmented Lagrangian methods for convex programming
A Majorized ADMM with Indefinite Proximal Terms for Linearly Constrained Convex Composite Optimization
Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization
An $\mathcal{O}(1/k)$ Convergence Rate for the Variable Stepsize Bregman Operator Splitting Algorithm
Discussion on: "Why is resorting to fate wise? A critical look at randomized algorithms in systems and control"
Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates
Incremental Constraint Projection Methods for Monotone Stochastic Variational Inequalities
Sparse Learning for Large-Scale and High-Dimensional Data: A Randomized Convex-Concave Optimization Approach
Solving Large-Scale Optimization Problems with a Convergence Rate Independent of Grid Size
A Subgradient Method for Free Material Design
A Primal-Dual Algorithm with Line Search for General Convex-Concave Saddle Point Problems
Solving variational inequalities with Stochastic Mirror-Prox algorithm
Scalable Semidefinite Programming
Regularized HPE-Type Methods for Solving Monotone Inclusions with Improved Pointwise Iteration-Complexity Bounds
On the Convergence of Mirror Descent beyond Stochastic Convex Programming
On Solving Large-Scale Polynomial Convex Problems by Randomized First-Order Algorithms
An Acousto-electric Inverse Source Problem
Primal–dual first-order methods for a class of cone programming
Inexact model: a framework for optimization and variational inequalities
Higher-Order Methods for Convex-Concave Min-Max Optimization and Monotone Variational Inequalities
Smooth monotone stochastic variational inequalities and saddle point problems: a survey
Primal-Dual First-Order Methods for Affinely Constrained Multi-block Saddle Point Problems
Faster first-order primal-dual methods for linear programming using restarts and sharpness
Principled analyses and design of first-order methods with inexact proximal operators
Compression and data similarity: combination of two techniques for communication-efficient solving of distributed variational inequalities
First-order methods for convex optimization
Alternating Proximal-Gradient Steps for (Stochastic) Nonconvex-Concave Minimax Problems
The limited-memory recursive variational Gaussian approximation (L-RVGA)
Stochastic Saddle Point Problems with Decision-Dependent Distributions
From Halpern's fixed-point iterations to Nesterov's accelerated interpretations for root-finding problems
The operator splitting schemes revisited: primal-dual gap and degeneracy reduction by a unified analysis
An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems
Randomized Lagrangian stochastic approximation for large-scale constrained stochastic Nash games
Recent theoretical advances in decentralized distributed convex optimization
Near-Optimal Decentralized Algorithms for Saddle Point Problems over Time-Varying Networks
On non-ergodic convergence rate of the operator splitting method for a class of variational inequalities
Local saddle points for unconstrained polynomial optimization
Stochastic mirror descent dynamics and their convergence in monotone variational inequalities
On iteration complexity of a first-order primal-dual method for nonlinear convex cone programming
Sublinear time algorithms for approximate semidefinite programming
Accelerated gradient sliding for structured convex optimization
On the ergodic convergence rates of a first-order primal-dual algorithm
Cubic regularized Newton method for the saddle point models: a global and local convergence analysis
On lower iteration complexity bounds for the convex concave saddle point problems
An \(O(s^r)\)-resolution ODE framework for understanding discrete-time algorithms and applications to the linear convergence of minimax problems
Proportional-integral projected gradient method for conic optimization
A simplified view of first order methods for optimization
Extragradient and extrapolation methods with generalized Bregman distances for saddle point problems
Generalized mirror prox algorithm for monotone variational inequalities: Universality and inexact oracle
The saddle point problem of polynomials
A stochastic primal-dual method for a class of nonconvex constrained optimization
On the iteration complexity of some projection methods for monotone linear variational inequalities
On the linear convergence of the general first order primal-dual algorithm
Large-scale semidefinite programming via a saddle point mirror-prox algorithm
Dual extrapolation and its applications to solving variational inequalities and related problems
Inertial self-adaptive Bregman projection method for finite family of variational inequality problems in reflexive Banach spaces
Recovery of high-dimensional sparse signals via \(\ell_1\)-minimization
On the information-adaptive variants of the ADMM: an iteration complexity perspective
Sparse non Gaussian component analysis by semidefinite programming
Approximation accuracy, gradient methods, and error bound for structured convex optimization
Bounded perturbation resilience of extragradient-type methods and their applications
An improved first-order primal-dual algorithm with a new correction step
Inexact first-order primal-dual algorithms
Accelerated schemes for a class of variational inequalities
On the convergence rate of a class of proximal-based decomposition methods for monotone variational inequalities
A first-order primal-dual algorithm for convex problems with applications to imaging
Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming
Accelerated linearized Bregman method
On verifiable sufficient conditions for sparse signal recovery via \(\ell_{1}\) minimization
Barrier subgradient method
A double smoothing technique for solving unconstrained nondifferentiable convex optimization problems
An optimal method for stochastic composite optimization
An implementable proximal point algorithmic framework for nuclear norm minimization
Image restoration based on the minimized surface regularization
Iteration-complexity of first-order penalty methods for convex programming
Efficient first-order methods for convex minimization: a constructive approach
Golden ratio algorithms for variational inequalities
On stochastic mirror-prox algorithms for stochastic Cartesian variational inequalities: randomized block coordinate and optimal averaging schemes
A double extrapolation primal-dual algorithm for saddle point problems
Accelerated methods for saddle-point problem
An adaptive two-stage proximal algorithm for equilibrium problems in Hadamard spaces
On the \(O(1/t)\) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators
Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
Inexact alternating-direction-based contraction methods for separable linearly constrained convex optimization
An alternating extragradient method with non Euclidean projections for saddle point problems
Dual subgradient algorithms for large-scale nonsmooth learning problems
A cyclic block coordinate descent method with generalized gradient projections
Forward-reflected-backward method with variance reduction
Primal-dual proximal splitting and generalized conjugation in non-smooth non-convex optimization
Level-set methods for convex optimization
On the optimal linear convergence rate of a generalized proximal point algorithm
A version of the mirror descent method to solve variational inequalities
Adaptive inexact fast augmented Lagrangian methods for constrained convex optimization
An extragradient-based alternating direction method for convex minimization
Regret bounded by gradual variation for online convex optimization
Korpelevich's method for variational inequality problems in Banach spaces
Nonsymmetric proximal point algorithm with moving proximal centers for variational inequalities: convergence analysis
Estimation of high-dimensional low-rank matrices
Self-concordant barriers for convex approximations of structured convex sets
Dynamic stochastic approximation for multi-stage stochastic optimization
A golden ratio primal-dual algorithm for structured convex optimization
The generalized proximal point algorithm with step size 2 is not necessarily convergent
An optimal randomized incremental gradient method
A simple algorithm for a class of nonsmooth convex-concave saddle-point problems
Primal-dual subgradient methods for convex problems
Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants
A primal-dual prediction-correction algorithm for saddle point optimization
Faster algorithms for extensive-form game solving via improved smoothing functions
Distributionally robust optimization with correlated data from vector autoregressive processes
On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems
Communication-efficient algorithms for decentralized and stochastic optimization
On the convergence rate of Douglas-Rachford operator splitting method
Infinite-dimensional gradient-based descent for alpha-divergence minimisation
Saddle points of rational functions
Iteration complexity of generalized complementarity problems
Exploiting problem structure in optimization under uncertainty via online convex optimization
Self-concordant inclusions: a unified framework for path-following generalized Newton-type algorithms
Weak and strong convergence Bregman extragradient schemes for solving pseudo-monotone and non-Lipschitz variational inequalities
Convergence of two-stage method with Bregman divergence for solving variational inequalities
Bregman extragradient method with monotone rule of step adjustment
Subgradient methods for saddle-point problems
A telescopic Bregmanian proximal gradient method without the global Lipschitz continuity assumption
Bregman subgradient extragradient method with monotone self-adjustment stepsize for solving pseudo-monotone variational inequalities and fixed point problems
On the resolution of misspecified convex optimization and monotone variational inequality problems
An adaptive proximal method for variational inequalities
An efficient primal dual prox method for non-smooth optimization
On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators
Convergence of the method of extrapolation from the past for variational inequalities in uniformly convex Banach spaces
Convergence of the operator extrapolation method for variational inequalities in Banach spaces
Learning in nonatomic games. I: Finite action spaces and population games
Mirror Prox algorithm for multi-term composite minimization and semi-separable problems
On the efficiency of a randomized mirror descent algorithm in online optimization problems
PPA-like contraction methods for convex optimization: a framework using variational inequality approach
Solving variational inequalities with monotone operators on domains given by linear minimization oracles
A semi-definite programming approach for robust tracking