Composite optimization with coupling constraints via dual proximal gradient method with applications to asynchronous networks
From MaRDI portal
Publication:6085458
DOI: 10.1002/rnc.6048
zbMath: 1528.93018
arXiv: 2102.12797
OpenAlex: W4220787688
MaRDI QID: Q6085458
Publication date: 12 December 2023
Published in: International Journal of Robust and Nonlinear Control
Full work available at URL: https://arxiv.org/abs/2102.12797
Cites Work
- Primal-dual subgradient methods for convex problems
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Primal recovery from consensus-based dual decomposition for distributed convex optimization
- Incremental proximal methods for large scale convex optimization
- A fast dual proximal gradient algorithm for convex minimization and applications
- Dual decomposition for multi-agent distributed optimization with coupling constraints
- On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
- Global convergence of ADMM in nonconvex nonsmooth optimization
- Asynchronous parallel algorithms for nonconvex optimization
- Consensus in the network with uniform constant communication delay
- Incremental Subgradient Methods for Nondifferentiable Optimization
- Consensus in multi-agent systems with communication constraints
- Distributed Constrained Optimization by Consensus-Based Primal-Dual Perturbation Method
- Bayesian lasso regression
- Distributed zero‐gradient‐sum algorithm for convex optimization with time‐varying communication delays and switching networks
- Distributed Proximal Gradient Algorithm for Partially Asynchronous Computer Clusters
- Speeding Up Distributed Machine Learning Using Codes
- Asynchronous Multiagent Primal-Dual Optimization
- Multi-Agent Distributed Optimization via Inexact Consensus ADMM
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- Asynchronous Distributed ADMM for Large-Scale Optimization—Part I: Algorithm and Convergence Analysis
- A Proximal Dual Consensus ADMM Method for Multi-Agent Constrained Optimization
- A Distributed, Asynchronous, and Incremental Algorithm for Nonconvex Optimization: An ADMM Approach
- Constraint-Coupled Distributed Optimization: A Relaxation and Duality Approach
- Stability of Open Multiagent Systems and Applications to Dynamic Consensus
- A Generalized Accelerated Composite Gradient Method: Uniting Nesterov's Fast Gradient Method and FISTA
- A Duality-Based Approach for Distributed Min–Max Optimization
- Gradient‐free method for distributed multi‐agent optimization via push‐sum algorithms
- Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient
- Distributed Saddle-Point Subgradient Algorithms With Laplacian Averaging
- On Distributed Convex Optimization Under Inequality and Equality Constraints
- Distributed Time-Varying Quadratic Optimization for Multiple Agents Under Undirected Graphs
- Distributed primal–dual stochastic subgradient algorithms for multi‐agent optimization under inequality constraints
- Convex Analysis
- Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization
- Distributed proximal‐gradient algorithms for nonsmooth convex optimization of second‐order multiagent systems