Convergence of Asynchronous Distributed Gradient Methods Over Stochastic Networks
Publication: 4567086
DOI: 10.1109/TAC.2017.2730481
zbMath: 1390.90433
OpenAlex: W2737743075
MaRDI QID: Q4567086
Authors: Lihua Xie, Shanying Zhu, Jin-Ming Xu, Yeng Chai Soh
Publication date: 27 June 2018
Published in: IEEE Transactions on Automatic Control
Full work available at URL: https://doi.org/10.1109/tac.2017.2730481
Related Items (21)
Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation
Gradient-free distributed optimization with exact convergence
Decentralized proximal splitting algorithms for composite constrained convex optimization
A unitary distributed subgradient method for multi-agent optimization with different coupling sources
Tracking-ADMM for distributed constraint-coupled optimization
High-dimensional \(M\)-estimation for Byzantine-robust decentralized learning
A resilient distributed optimization strategy against false data injection attacks
Distributed continuous-time constrained convex optimization with general time-varying cost functions
Distributed convex optimization as a tool for solving \(f\)-consensus problems
DIMIX: Diminishing Mixing for Sloppy Agents
Dynamics based privacy preservation in decentralized optimization
Gradient-tracking based differentially private distributed optimization with enhanced optimization accuracy
Synchronous distributed ADMM for consensus convex optimization problems with self-loops
Distributed algorithms for computing a fixed point of multi-agent nonexpansive operators
Distributed nonsmooth convex optimization over Markovian switching random networks with two step-sizes
Parallel alternating direction method of multipliers
Distributed decision-coupled constrained optimization via proximal-tracking
Fully asynchronous policy evaluation in distributed reinforcement learning over networks
Surplus-based accelerated algorithms for distributed optimization over directed networks
Triggered gradient tracking for asynchronous distributed optimization
On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation