A dual approach for optimal algorithms in distributed optimization over networks
From MaRDI portal
Publication: 5859014
DOI: 10.1080/10556788.2020.1750013
zbMath: 1464.90062
arXiv: 1809.00710
OpenAlex: W3016897523
MaRDI QID: Q5859014
Angelia Nedić, Soomin Lee, César A. Uribe, Alexander V. Gasnikov
Publication date: 15 April 2021
Published in: Optimization Methods and Software
Full work available at URL: https://arxiv.org/abs/1809.00710
Keywords: convex optimization; distributed optimization; primal-dual algorithms; optimal rates; optimization over networks
Mathematics Subject Classification: Programming involving graphs or networks (90C35); Convex programming (90C25); Abstract computational complexity for mathematical programming problems (90C60); Nonlinear programming (90C30)
Related Items
- Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
- A Fenchel dual gradient method enabling regularization for nonsmooth distributed optimization over time-varying networks
- Decentralized convex optimization on time-varying networks with application to Wasserstein barycenters
- Recent theoretical advances in decentralized distributed convex optimization
- Communication-Efficient Distributed Eigenspace Estimation
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Hybrid online learning control in networked multiagent systems: A survey
- On arbitrary compression for decentralized consensus and stochastic optimization over directed networks
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Primal recovery from consensus-based dual decomposition for distributed convex optimization
- Gradient sliding for composite optimization
- Efficient numerical methods for entropy-linear programming problems
- Gradient methods for minimizing composite functions
- First-order methods of smooth convex optimization with inexact oracle
- Universal gradient methods for convex optimization problems
- Distributed resource allocation on dynamic networks in quadratic time
- Distributed stochastic subgradient projection algorithms for convex optimization
- Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming
- Block splitting for distributed optimization
- Optimal scaling of a gradient method for distributed resource allocation
- A fast dual proximal gradient algorithm for convex minimization and applications
- Dual approaches to the minimization of strongly convex functionals with a simple structure under affine constraints
- On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
- Communication-efficient algorithms for decentralized and stochastic optimization
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- Adaptive restart for accelerated gradient schemes
- Analysis of accelerated gossip algorithms
- A Smoothed Dual Approach for Variational Wasserstein Problems
- Randomized Smoothing for Stochastic Optimization
- Double Smoothing Technique for Large-Scale Linearly Constrained Convex Optimization
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- Distributed Optimization Over Time-Varying Directed Graphs
- Fast Primal-Dual Gradient Method for Strongly Convex Minimization Problems with Linear Constraints
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Convergence and asymptotic agreement in distributed decision problems
- Decentralized Resource Allocation in Dynamic Networks of Agents
- Asymptotic agreement in distributed estimation
- Variational Analysis
- Fast Convergence Rates for Distributed Non-Bayesian Learning
- Optimization and Analysis of Distributed Averaging With Short Node Memory
- Analysis of Max-Consensus Algorithms in Wireless Channels
- A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Harnessing Smoothness to Accelerate Distributed Optimization
- Reaching a Consensus
- Optimal Distributed Convex Optimization on Slowly Time-Varying Graphs
- Application of a Smoothing Technique to Decomposition in Convex Optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- On Distributed Averaging Algorithms and Quantization Effects
- Accelerated Distributed Nesterov Gradient Descent
- Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Random Coordinate Descent Algorithms for Multi-Agent Convex Optimization Over Networks
- An \(O(1/k)\) Gradient Method for Network Resource Allocation Problems