Subgradient methods for saddle-point problems
DOI: 10.1007/s10957-009-9522-7
zbMath: 1175.90415
OpenAlex: W2020123437
MaRDI QID: Q1035898
Angelia Nedić, Asuman Ozdaglar
Publication date: 4 November 2009
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://doi.org/10.1007/s10957-009-9522-7
Keywords: convergence rate, averaging, approximate primal solutions, primal-dual subgradient methods, saddle-point subgradient methods
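For orientation, the keywords describe a primal-dual subgradient scheme for a convex-concave function L(x, mu): the minimizing player takes projected subgradient descent steps in x, the maximizing player takes projected ascent steps in mu, and running averages of the iterates serve as approximate primal-dual solutions. Below is a minimal Python sketch of such an iteration, not the authors' code; the function names, the bilinear Lagrangian, and all problem data are illustrative assumptions.

import numpy as np

def saddle_subgradient(grad_x, grad_mu, proj_x, proj_mu,
                       x0, mu0, step, iters):
    """Projected saddle-point subgradient iteration with averaging."""
    x, mu = x0, mu0
    x_avg, mu_avg = np.zeros_like(x0), np.zeros_like(mu0)
    for k in range(1, iters + 1):
        gx, gmu = grad_x(x, mu), grad_mu(x, mu)
        x = proj_x(x - step * gx)      # descent step for the min player
        mu = proj_mu(mu + step * gmu)  # ascent step for the max player
        # running averages (1/k) * sum of iterates
        x_avg += (x - x_avg) / k
        mu_avg += (mu - mu_avg) / k
    return x_avg, mu_avg

# Illustrative (assumed) Lagrangian L(x, mu) = c.x + mu.(A x - b) for
# min_{x in [0,1]^n} c.x subject to A x <= b, with multipliers mu >= 0.
rng = np.random.default_rng(0)
n, m = 5, 3
A, b, c = rng.standard_normal((m, n)), rng.random(m), rng.random(n)

x_hat, mu_hat = saddle_subgradient(
    grad_x=lambda x, mu: c + A.T @ mu,          # subgradient of L in x
    grad_mu=lambda x, mu: A @ x - b,            # supergradient of L in mu
    proj_x=lambda x: np.clip(x, 0.0, 1.0),      # projection onto [0,1]^n
    proj_mu=lambda mu: np.maximum(mu, 0.0),     # projection onto mu >= 0
    x0=np.zeros(n), mu0=np.zeros(m),
    step=0.01, iters=20000,
)
print("averaged primal iterate:", x_hat)
print("max constraint violation:", max(0.0, float(np.max(A @ x_hat - b))))

Averaging is what produces the approximate primal solutions named in the keywords: with a constant stepsize, the saddle-point gap of the averaged pair shrinks roughly at rate O(1/k), up to an error term proportional to the stepsize.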
Related Items
- Local saddle points for unconstrained polynomial optimization
- Dual averaging with adaptive random projection for solving evolving distributed optimization problems
- Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings
- Data-driven distributionally robust risk parity portfolio optimization
- Primal-dual algorithms for multi-agent structured optimization over message-passing architectures with bounded communication delays
- A primal-dual algorithm for nonnegative \(N\)-th order CP tensor decomposition: application to fluorescence spectroscopy data analysis
- Primal-dual algorithm for distributed constrained optimization
- Distributed design of approximately optimal controller for identical discrete-time multi-agent systems
- The saddle point problem of polynomials
- Subgradient method for nonconvex nonsmooth optimization
- Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems
- Distributed consensus-based solver for semi-definite programming: an optimization viewpoint
- Golden Ratio Primal-Dual Algorithm with Linesearch
- On the emergence of oscillations in distributed resource allocation
- A hierarchical algorithm for vehicle-to-grid integration under line capacity constraints
- First-Order Methods for Problems with $O(1)$ Functional Constraints Can Have Almost the Same Convergence Rate as for Unconstrained Problems
- Linear convergence of primal-dual gradient methods and their performance in distributed optimization
- Two-timescale recurrent neural networks for distributed minimax optimization
- Distributed stochastic subgradient projection algorithms for convex optimization
- Primal-dual \(\varepsilon\)-subgradient method for distributed optimization
- Fréchet subdifferential calculus for interval-valued functions and its applications in nonsmooth interval optimization
- A differentially private distributed optimization method for constrained optimization
- A unified single-loop alternating gradient projection algorithm for nonconvex-concave and convex-nonconcave minimax problems
- Duality and sensitivity analysis of multistage linear stochastic programs
- On the Role of a Market Maker in Networked Cournot Competition
- Stability of primal-dual gradient dynamics and applications to network optimization
- Linearized generalized ADMM-based algorithm for multi-block linearly constrained separable convex programming in real-world applications
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
- Robust Accelerated Primal-Dual Methods for Computing Saddle Points
- Decentralized optimization over slowly time-varying graphs: algorithms and lower bounds
- An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems
- Randomized Lagrangian stochastic approximation for large-scale constrained stochastic Nash games
- Primal-Dual Stochastic Gradient Method for Convex Programs with Many Functional Constraints
- Utility/privacy trade-off as regularized optimal transport
- A Projection-Based Decomposition Algorithm for Distributed Fast Computation of Control in Microgrids
- Saddle-Point Dynamics: Conditions for Asymptotic Stability of Saddle Points
- An inexact modified subgradient algorithm for primal-dual problems via augmented Lagrangians
- Distributed multi-agent optimization with state-dependent communication
- A two-phase algorithm for a variational inequality formulation of equilibrium problems
- Approximate dual averaging method for multiagent saddle-point problems with stochastic subgradients
- Online First-Order Framework for Robust Convex Optimization
- Iteration complexity of inexact augmented Lagrangian methods for constrained convex programming
- Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems
- Distributed primal–dual stochastic subgradient algorithms for multi-agent optimization under inequality constraints
- Distributed convergence to Nash equilibria in two-network zero-sum games
- A Simple Parallel Algorithm with an $O(1/t)$ Convergence Rate for General Convex Programs
- Distributed proximal-gradient method for convex optimization with inequality constraints
- A golden ratio primal-dual algorithm for structured convex optimization
- Convergence rate for consensus with delays
- A partially parallel splitting method for multiple-block separable convex programming with applications to robust PCA
- A distributed Douglas-Rachford splitting method for multi-block convex minimization problems
- A Distributed ADMM-like Method for Resource Sharing over Time-Varying Networks
- A stochastic primal-dual method for optimization with conditional value at risk constraints
- Stochastic Recursive Inclusions in Two Timescales with Nonadditive Iterate-Dependent Markov Noise
- Saddle points of rational functions
- Abstract convergence theorem for quasi-convex optimization problems with applications
- Convergence rates of subgradient methods for quasi-convex optimization problems
- An inexact primal-dual algorithm for semi-infinite programming
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Primal convergence from dual subgradient methods for convex optimization
- Distributed Bregman-Distance Algorithms for Min-Max Optimization
- Primal recovery from consensus-based dual decomposition for distributed convex optimization
- Online learning over a decentralized network through ADMM
Cites Work
- Primal-dual subgradient methods for convex problems
- Projected subgradient methods with non-Euclidean distances for non-differentiable convex minimization and variational inequalities
- On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space
- Convergence of some algorithms for convex minimization
- The mathematics of internet congestion control
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Ergodic, primal convergence in dual subgradient schemes for convex programming
- Recovery of primal solutions when using subgradient optimization methods to solve Lagrangian duals of linear programs
- Large-Scale Convex Optimization Via Saddle Point Computation
- Incremental Subgradient Methods for Nondifferentiable Optimization
- Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods
- Ergodic convergence in subgradient optimization
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- What is the Subdifferential of the Closed Convex Hull of a Function?
- Interior Gradient and Proximal Methods for Convex and Conic Optimization