Projected subgradient methods with non-Euclidean distances for non-differentiable convex minimization and variational inequalities
MaRDI QID: Q1016351
DOI: 10.1007/s10107-007-0147-z
zbMATH: 1190.90118
OpenAlex: W2161209207
Alfred Auslender, Marc Teboulle
Publication date: 5 May 2009
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-007-0147-z
Mathematics Subject Classification:
- Numerical mathematical programming methods (65K05)
- Convex programming (90C25)
- Complementarity and equilibrium problems and variational inequalities (finite dimensions) (aspects of mathematical programming) (90C33)
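
The paper's subject is projected subgradient iterations in which the usual Euclidean projection is replaced by a proximal step based on a non-Euclidean (Bregman-type) distance, the mirror-descent family. Purely as an illustration of that family, and not as the paper's own algorithm, the following minimal Python sketch runs entropic mirror descent on the unit simplex, where the non-Euclidean prox step has a closed-form multiplicative update; the objective, step sizes, and all identifiers below are assumptions chosen for the example.

    import numpy as np

    def entropic_mirror_step(x, g, t):
        """One non-Euclidean projected subgradient (mirror descent) step on the
        unit simplex with the entropy distance: the multiplicative update
        x_+ = x * exp(-t g) / sum(x * exp(-t g)) solves the entropic prox
        subproblem in closed form, so no Euclidean projection is needed."""
        w = x * np.exp(-t * g)
        return w / w.sum()

    # Illustrative use (not from the paper): minimize the nonsmooth convex
    # function f(x) = max_i (a_i . x) over the simplex; a row attaining the
    # max is a subgradient of f at x.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 10))
    x = np.full(10, 0.1)                      # start at the simplex center
    for k in range(1, 201):
        i = np.argmax(A @ x)                  # active row -> subgradient A[i]
        x = entropic_mirror_step(x, A[i], t=1.0 / np.sqrt(k))
    print("approx. minimal max-value:", np.max(A @ x))

The diminishing step size 1/sqrt(k) is the standard choice for subgradient schemes; the entropy distance keeps the iterates strictly positive, which is the "interior" behavior that non-Euclidean distances are used to obtain.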
Related Items
- Interior quasi-subgradient method with non-Euclidean distances for constrained quasi-convex optimization problems in Hilbert spaces
- Iterative Methods for the Elastography Inverse Problem of Locating Tumors
- A simplified view of first order methods for optimization
- A fast dual proximal gradient algorithm for convex minimization and applications
- A penalty algorithm for solving convex separable knapsack problems
- Stochastic composition optimization of functions without Lipschitz continuous gradient
- Self-adaptive gradient projection algorithms for variational inequalities involving non-Lipschitz continuous operators
- No-regret algorithms in on-line learning, games and convex optimization
- Explainable bilevel optimization: an application to the Helsinki Deblur Challenge
- First-order methods for convex optimization
- Evolution differential inclusion with projection for solving constrained nonsmooth convex optimization in Hilbert space
- An alternating extragradient method with non Euclidean projections for saddle point problems
- Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
- Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems
- Variable Metric Inexact Line-Search-Based Methods for Nonsmooth Optimization
- Scaling Techniques for $\epsilon$-Subgradient Methods
- Abstract convergence theorem for quasi-convex optimization problems with applications
- Subgradient methods for saddle-point problems
- On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
- Convergence rates of subgradient methods for quasi-convex optimization problems
- Asymptotic behavior analysis on multivalued evolution inclusion with projection in Hilbert space
- Two-step inertial Bregman alternating minimization algorithm for nonconvex and nonsmooth problems
- An interior projected-like subgradient method for mixed variational inequalities
Cites Work
- On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space
- Combined relaxation methods for variational inequalities
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Interior projection-like methods for monotone variational inequalities
- The Ordered Subsets Mirror Descent Optimization Method with Applications to Tomography
- Stochastic quasigradient methods and their application to system optimization
- Control sequence methods in constrained extremal problems
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- Convex Analysis
- Brève communication. Résolution numérique d'inégalités variationnelles [Short communication: Numerical solution of variational inequalities]
- Interior Gradient and Epsilon-Subgradient Descent Methods for Constrained Convex Minimization