On the Convergence of Decentralized Gradient Descent
Publication: 2821798
DOI: 10.1137/130943170
zbMath: 1345.90068
arXiv: 1310.7063
OpenAlex: W1616857247
MaRDI QID: Q2821798
Kun Yuan, Wotao Yin, Qing Ling
Publication date: 23 September 2016
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1310.7063
MSC classification: Numerical mathematical programming methods (65K05); Convex programming (90C25); Nonlinear programming (90C30)
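The record's subject, decentralized gradient descent (DGD), admits a compact illustration. Below is a minimal sketch, assuming quadratic local objectives f_i(x) = ½(x − b_i)² on a 5-agent ring with uniform mixing weights; the names `dgd`, `W`, `b` and all parameter values are illustrative, not taken from the paper. With a fixed step size α, DGD drives every agent to an O(α) neighborhood of the minimizer of the average objective, here the mean of the b_i.

```python
import numpy as np

# Illustrative DGD iteration: each agent i holds f_i(x) = 0.5 * (x - b_i)^2
# and repeatedly mixes with its neighbors, then takes a local gradient step:
#     x_i <- sum_j W[i, j] * x_j - alpha * grad f_i(x_i)
# W is a symmetric, doubly stochastic mixing matrix on the network graph.

def dgd(b, W, alpha=0.05, iters=2000):
    """Run fixed-step DGD; b[i] is agent i's local target."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = W @ x - alpha * (x - b)  # mixing step + local gradient step
    return x

n = 5
b = np.arange(n, dtype=float)  # local targets 0..4; global minimizer = mean = 2

# Uniform weights on a ring: each agent averages itself and its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = W[i, i] = 1.0 / 3.0

x = dgd(b, W)
```

A doubly stochastic W preserves the average of the agents' iterates at each mixing step, so the network average follows a centralized gradient descent on the average objective while individual agents stay within an O(α)-sized disagreement neighborhood, which is the regime the paper's fixed-step-size analysis characterizes.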
Related Items
- Discussion of the paper 'A review of distributed statistical inference'
- Blended dynamics approach to distributed optimization: sum convexity and convergence rate
- On the Divergence of Decentralized Nonconvex Optimization
- Distributed smooth optimisation with event-triggered proportional-integral algorithms
- Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation
- A unitary distributed subgradient method for multi-agent optimization with different coupling sources
- Subgradient averaging for multi-agent optimisation with different constraint sets
- A distributed methodology for approximate uniform global minimum sharing
- Differentially private distributed optimization for multi-agent systems via the augmented Lagrangian algorithm
- Distributed constrained optimization for multi-agent systems over a directed graph with piecewise stepsize
- An accelerated exact distributed first-order algorithm for optimization over directed networks
- Efficient and Reliable Overlay Networks for Decentralized Federated Learning
- High-dimensional \(M\)-estimation for Byzantine-robust decentralized learning
- A distributed accelerated optimization algorithm over time-varying directed graphs with uncoordinated step-sizes
- Correction-based diffusion LMS algorithms for distributed estimation
- A divide-and-conquer algorithm for distributed optimization on networks
- A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities
- Distributed Algorithms with Finite Data Rates that Solve Linear Equations
- Neurodynamic approaches for multi-agent distributed optimization
- Learning Coefficient Heterogeneity over Networks: A Distributed Spanning-Tree-Based Fused-Lasso Regression
- Understanding a Class of Decentralized and Federated Optimization Algorithms: A Multirate Feedback Control Perspective
- DIMIX: Diminishing Mixing for Sloppy Agents
- Dynamics based privacy preservation in decentralized optimization
- A variance-reduced stochastic gradient tracking algorithm for decentralized optimization with orthogonality constraints
- Distributed optimal frequency control under communication packet loss in multi-agent electric energy systems
- Event-triggered primal-dual design with linear convergence for distributed nonstrongly convex optimization
- EFIX: exact fixed point methods for distributed optimization
- Network Gradient Descent Algorithm for Decentralized Federated Learning
- Golden ratio proximal gradient ADMM for distributed composite convex optimization
- Using Witten Laplacians to Locate Index-1 Saddle Points
- Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
- Distributed algorithms for computing a fixed point of multi-agent nonexpansive operators
- Second-Order Guarantees of Distributed Gradient Algorithms
- Recent theoretical advances in decentralized distributed convex optimization
- Revisiting EXTRA for Smooth Distributed Optimization
- Distributed consensus-based multi-agent convex optimization via gradient tracking technique
- Decentralized Consensus Algorithm with Delayed and Stochastic Gradients
- On the linear convergence of two decentralized algorithms
- ARock: An Algorithmic Framework for Asynchronous Parallel Coordinate Updates
- Projected subgradient based distributed convex optimization with transmission noises
- Convergence results of a nested decentralized gradient method for non-strongly convex problems
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Adaptive online distributed optimization in dynamic environments
- A Kaczmarz Algorithm for Solving Tree Based Distributed Systems of Equations
- Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
- Online learning over a decentralized network through ADMM
- Newton-like Method with Diagonal Correction for Distributed Optimization
Cites Work
- Distributed stochastic subgradient projection algorithms for convex optimization
- Fast linearized Bregman iteration for compressive sensing and sparse denoising
- Error forgetting of Bregman iteration
- Augmented $\ell_1$ and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm
- Fast Distributed Gradient Methods
- Analysis and Generalizations of the Linearized Bregman Method
- Exact Regularization of Convex Programs
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Consensus in Ad Hoc WSNs With Noisy Links—Part I: Distributed Estimation of Deterministic Signals
- Decentralized Sparse Signal Recovery for Compressive Sleeping Wireless Sensor Networks
- Distributed Spectrum Sensing for Cognitive Radio Networks by Exploiting Sparsity
- Group-Lasso on Splines for Spectrum Cartography
- Distributed Basis Pursuit
- Diffusion Adaptation Strategies for Distributed Optimization and Learning Over Networks
- Decentralized Jointly Sparse Optimization by Reweighted $\ell_{q}$ Minimization
- Decentralized Dynamic Optimization Through the Alternating Direction Method of Multipliers
- Consensus and Cooperation in Networked Multi-Agent Systems
- Fastest Mixing Markov Chain on a Graph
- Distributed Subgradient Methods for Multi-Agent Optimization
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Bregman Iterative Algorithms for $\ell_1$-Minimization with Applications to Compressed Sensing
- Convex analysis and monotone operator theory in Hilbert spaces