Perseus: a simple and optimal high-order method for variational inequalities
DOI: 10.1007/s10107-024-02075-2 · MaRDI QID: Q6665392
Authors: Michael I. Jordan, Tian-Yi Lin
Publication date: 17 January 2025
Published in: Mathematical Programming. Series A. Series B
Keywords: variational inequality; acceleration; iteration complexity; local superlinear convergence; simple and optimal high-order method
MSC classifications: Analysis of algorithms and problem complexity (68Q25); Abstract computational complexity for mathematical programming problems (90C60); Variational inequalities (49J40); Newton-type methods (49M15); Complementarity and equilibrium problems and variational inequalities (finite dimensions) (aspects of mathematical programming) (90C33); Numerical methods for variational inequalities and related problems (65K15)
Cites Work
- Gradient methods for minimizing composite functions
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Lectures on convex optimization
- Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications
- Dual extrapolation and its applications to solving variational inequalities and related problems
- On solving trust-region and other regularised subproblems in optimization
- Accelerating the cubic regularization of Newton's method on convex problems
- Monotone (nonlinear) operators in Hilbert space
- A modification of the Arrow-Hurwicz method for search of saddle points
- Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems
- New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure
- Accelerated schemes for a class of variational inequalities
- Learning in games with continuous action sets and unknown payoff functions
- Solving variational inequality and fixed point problems by line searches and potential optimization
- An optimal randomized incremental gradient method
- A unifying geometric solution framework and complexity analysis for variational inequalities
- A control-theoretic perspective on optimal high-order optimization
- Local convergence of tensor methods
- Cubic regularized Newton method for the saddle point models: a global and local convergence analysis
- On lower iteration complexity bounds for the convex concave saddle point problems
- Lower bounds for finding stationary points I
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- Implementable tensor methods in unconstrained convex optimization
- Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants
- Oracle complexity of second-order methods for smooth convex optimization
- On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators
- Adaptive restart for accelerated gradient schemes
- Linear convergence of first order methods for non-strongly convex optimization
- Cournot games with biconcave demand
- Cubic regularization of Newton method and its global performance
- On some non-linear elliptic differential functional equations
- Superfast second-order methods for unconstrained convex optimization
- Inexact accelerated high-order proximal-point methods
- A simple nearly optimal restart scheme for speeding up first-order methods
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- An accelerated hybrid proximal extragradient method for convex optimization and its implications to second-order methods
- On the Complexity of the Hybrid Proximal Extragradient Method for the Iterates and the Ergodic Mean
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- The Orthogonality Theorem and the Strong-f-Monotonicity Condition for Variational Inequality Algorithms
- Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems
- Product Positioning Under Price Competition
- Optimal methods of smooth convex minimization
- Generalized Descent Methods for Asymmetric Systems of Equations
- A New Projection Method for Variational Inequality Problems
- Averaging Schemes for Variational Inequalities and Systems of Equations
- An Introduction to Variational Inequalities and Their Applications
- On High-order Model Regularization for Constrained Optimization
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- A variational perspective on accelerated methods in optimization
- Accelerated Regularized Newton Methods for Minimizing Composite Convex Functions
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Random Gradient Extrapolation for Distributed and Stochastic Optimization
- Solving the Trust-Region Subproblem using the Lanczos Method
- Iteration-Complexity of a Newton Proximal Extragradient Method for Monotone Variational Inequalities and Inclusion Problems
- A Modified Forward-Backward Splitting Method for Maximal Monotone Mappings
- Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives
- Unified Acceleration of High-Order Algorithms under General Hölder Continuity
- Inexact High-Order Proximal-Point Methods with Auxiliary Search Procedure
- Variants of the A-HPE and large-step A-HPE algorithms for strongly convex problems with applications to accelerated high-order tensor methods
- Simple and Optimal Methods for Stochastic Variational Inequalities, I: Operator Extrapolation
- Evaluation Complexity of Algorithms for Nonconvex Optimization: Theory, Computation and Perspectives
- A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
- Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems
- Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
- Adaptive restart of accelerated gradient methods under local quadratic growth condition
- Equilibrium Points of Bimatrix Games
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- Prediction, Learning, and Games
- The Approximation of Fixed Points of a Continuous Mapping
- Characterizations of pseudomonotone maps and economic equilibrium
- Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- On inexact solution of auxiliary problems in tensor methods for convex optimization
- Higher-Order Methods for Convex-Concave Min-Max Optimization and Monotone Variational Inequalities
- Convex analysis and monotone operator theory in Hilbert spaces
- The complexity of constrained min-max optimization
- Monotone Inclusions, Acceleration, and Closed-Loop Control
- Some adaptive first-order methods for variational inequalities with relatively strongly monotone operators and generalized smoothness
- Adaptive Third-Order Methods for Composite Convex Optimization