On the convergence of conditional \(\varepsilon\)-subgradient methods for convex programs and convex-concave saddle-point problems.
From MaRDI portal
DOI: 10.1016/S0377-2217(02)00629-X · zbMath: 1053.90107 · OpenAlex: W2011746047 · MaRDI QID: Q1410305
Torbjörn Larsson, Michael Patriksson, Ann-Brith Strömberg
Publication date: 14 October 2003
Published in: European Journal of Operational Research
Full work available at URL: https://doi.org/10.1016/s0377-2217(02)00629-x
Mathematics Subject Classification: Convex programming (90C25) · Large-scale problems in mathematical programming (90C06) · Methods of reduced gradient type (90C52)
Related Items (10)
- Inexact subgradient methods for quasi-convex optimization problems
- On the convergence of primal-dual hybrid gradient algorithms for total variation image restoration
- Incremental subgradient algorithms with dynamic step sizes for separable convex optimizations
- Subgradient algorithms on Riemannian manifolds of lower bounded curvatures
- Ergodic, primal convergence in dual subgradient schemes for convex programming. II: The case of inconsistent primal problems
- A Subgradient Method Based on Gradient Sampling for Solving Convex Optimization Problems
- Modified Fejér sequences and applications
- A merit function approach to the subgradient method with averaging
- Scaling Techniques for \(\epsilon\)-Subgradient Methods
- Weak subgradient method for solving nonsmooth nonconvex optimization problems
Cites Work
- New bundle methods for solving Lagrangian relaxation dual problems
- Lagrangian dual ascent by generalized linear programming
- Conditional subgradient optimization -- theory and applications
- Error stability properties of generalized gradient-type algorithms
- Surrogate gradient algorithm for Lagrangian relaxation
- On the projected subgradient method for nonsmooth convex optimization in a Hilbert space
- Convergence of some algorithms for convex minimization
- A primal-dual algorithm for monotropic programming and its application to network optimization
- The volume algorithm: Producing primal solutions with a subgradient method
- Proximal level bundle methods for convex nondifferentiable optimization, saddle-point problems and variational inequalities
- Ergodic, primal convergence in dual subgradient schemes for convex programming
- Recovery of primal solutions when using subgradient optimization methods to solve Lagrangian duals of linear programs
- Large-Scale Convex Optimization Via Saddle Point Computation
- Global Optimality Conditions for Discrete and Nonconvex Optimization—With Applications to Lagrangian Heuristics and Column Generation
- Primal-Dual Projected Gradient Algorithms for Extended Linear-Quadratic Programming
- A Lagrangean Relaxation Scheme for Structured Linear Programs With Application To Multicommodity Network Flows
- A dual scheme for traffic assignment problems
- Ergodic convergence in subgradient optimization
- Minimization of unsmooth functionals