Communication-efficient algorithms for decentralized and stochastic optimization
DOI: 10.1007/s10107-018-1355-4
zbMath: 1437.90125
arXiv: 1701.03961
OpenAlex: W2963855576
Wikidata: Q128829704
Scholia: Q128829704
MaRDI QID: Q2297648
Publication date: 20 February 2020
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1701.03961
Keywords: complexity; stochastic programming; primal-dual method; decentralized optimization; nonsmooth functions; communication efficient; decentralized machine learning
MSC classifications: Convex programming (90C25); Stochastic programming (90C15); Numerical methods based on nonlinear programming (49M37); Decentralized systems (93A14)
Related Items (20)
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Gradient sliding for composite optimization
- On the ergodic convergence rates of a first-order primal-dual algorithm
- An optimal method for stochastic composite optimization
- Distributed stochastic subgradient projection algorithms for convex optimization
- Incremental proximal methods for large scale convex optimization
- Validation analysis of mirror descent stochastic approximation method
- An optimal randomized incremental gradient method
- A first-order primal-dual algorithm for convex problems with applications to imaging
- On the O(1/n) Convergence Rate of the Douglas–Rachford Alternating Direction Method
- Distributed Optimization Over Time-Varying Directed Graphs
- Fast Distributed Gradient Methods
- On the Complexity of the Hybrid Proximal Extragradient Method for the Iterates and the Ergodic Mean
- Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems
- Incremental Stochastic Subgradient Algorithms for Convex Optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Convergence Rate of Distributed ADMM Over Networks
- Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- Multi-Agent Distributed Optimization via Inexact Consensus ADMM
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Stochastic Proximal Gradient Consensus Over Random Networks
- DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
- Harnessing Smoothness to Accelerate Distributed Optimization
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Distributed Subgradient Methods for Multi-Agent Optimization
- Optimal Primal-Dual Methods for a Class of Saddle Point Problems
- An Accelerated Linearized Alternating Direction Method of Multipliers
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Coordination of groups of mobile autonomous agents using nearest neighbor rules
- Iteration-Complexity of Block-Decomposition Algorithms and the Alternating Direction Method of Multipliers
- Asynchronous Broadcast-Based Convex Optimization Over a Network
- On Distributed Convex Optimization Under Inequality and Equality Constraints
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms