GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning
From MaRDI portal
Publication:4969135
zbMath: 1498.68234 · arXiv: 1909.00047 · MaRDI QID: Q4969135
Mehdi Bennis, Jihong Park, Amrit Singh Bedi, Anis Elgabli, Vaneet Aggarwal
Publication date: 5 October 2020
Full work available at URL: https://arxiv.org/abs/1909.00047
Convex programming (90C25); Learning and adaptive systems in artificial intelligence (68T05); Distributed systems (68M14)
Related Items (1)
Uses Software
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Minimizing finite sums with the stochastic average gradient
- A dual algorithm for the solution of nonlinear variational problems via finite element approximation
- Parallel multi-block ADMM with \(o(1/k)\) convergence
- Distributed Optimization Over Time-Varying Directed Graphs
- Distributed Constrained Optimization by Consensus-Based Primal-Dual Perturbation Method
- Fast Distributed Gradient Methods
- The N-City Travelling Salesman Problem: Statistical Mechanics and the Metropolis Algorithm
- On the capacity of channels with Gaussian and non-Gaussian noise
- Some Simple Applications of the Travelling Salesman Problem
- Multi-Agent Distributed Optimization via Inexact Consensus ADMM
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Proximity Without Consensus in Online Multiagent Optimization
- Asynchronous Saddle Point Algorithm for Stochastic Optimization in Heterogeneous Networks
- Distributed Subgradient Methods for Multi-Agent Optimization
- Communication-Censored ADMM for Decentralized Consensus Optimization
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- A Convergent Incremental Gradient Method with a Constant Step Size
- The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent