Exact Diffusion for Distributed Optimization and Learning—Part II: Convergence Analysis
Publication: 4628230
DOI: 10.1109/TSP.2018.2875883
zbMath: 1414.90278
arXiv: 1702.05142
OpenAlex: W2659643337
MaRDI QID: Q4628230
Ali H. Sayed, Bicheng Ying, Kun Yuan, Xiaochuan Zhao
Publication date: 6 March 2019
Published in: IEEE Transactions on Signal Processing
Full work available at URL: https://arxiv.org/abs/1702.05142
MSC classifications: Linear regression; mixed models (62J05) · Convex programming (90C25) · Stochastic programming (90C15) · Deterministic network models in operations research (90B10)
Related Items (5)
Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation
Linear convergence of primal-dual gradient methods and their performance in distributed optimization
Correction-based diffusion LMS algorithms for distributed estimation
On the linear convergence of two decentralized algorithms
Projected subgradient based distributed convex optimization with transmission noises