Optimal gradient tracking for decentralized optimization
Publication: 6608029
DOI: 10.1007/s10107-023-01997-7
MaRDI QID: Q6608029
Ming Yan, Unnamed Author, Lei Shi, Shi Pu
Publication date: 19 September 2024
Published in: Mathematical Programming. Series A. Series B
Cites Work
- Unnamed Item
- Unnamed Item
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- On the Convergence of Decentralized Gradient Descent
- Fast Distributed Gradient Methods
- Revisiting EXTRA for Smooth Distributed Optimization
- Chebyshev Acceleration Techniques for Solving Nonsymmetric Eigenvalue Problems
- Multi-fidelity optimization via surrogate modelling
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Distributed asynchronous computation of fixed points
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- On Projected Stochastic Gradient Descent Algorithm with Weighted Averaging for Least Squares Regression
- Fast Convergence Rates for Distributed Non-Bayesian Learning
- Convergence of Asynchronous Distributed Gradient Methods Over Stochastic Networks
- Distributed Recursive Least-Squares: Stability and Performance Analysis
- Multi-Agent Distributed Optimization via Inexact Consensus ADMM
- DLM: Decentralized Linearized Alternating Direction Method of Multipliers
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Exact Diffusion for Distributed Optimization and Learning—Part I: Algorithm Development
- Harnessing Smoothness to Accelerate Distributed Optimization
- Optimal Distributed Convex Optimization on Slowly Time-Varying Graphs
- Distributed Subgradient Methods for Multi-Agent Optimization
- Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
- Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis
- Accelerated Distributed Nesterov Gradient Descent
- A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Distributed Learning Algorithms for Spectrum Sharing in Spatial Random Access Wireless Networks
- ADD-OPT: Accelerated Distributed Directed Optimization
- Decentralized Optimization Over Time-Varying Directed Graphs With Row and Column-Stochastic Matrices
- Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization
- Push–Pull Gradient Methods for Distributed Optimization in Networks