scientific article; zbMATH DE number 7307473
From MaRDI portal
Publication: 5149230
Authors: Yuejie Chi, Boyue Li, Shicong Cen, Yuxin Chen
Publication date: 8 February 2021
Full work available at URL: https://arxiv.org/abs/1909.05844
Title: Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction
Keywords: variance reduction; decentralized optimization; federated learning; communication efficiency; gradient tracking
Related Items (8)
- Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation
- DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization
- Localization and approximations for distributed non-convex optimization
- Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
- Unnamed Item
- Unnamed Item
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
Uses Software
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Chebyshev acceleration of iterative refinement
- Communication-efficient algorithms for decentralized and stochastic optimization
- Distributed nonconvex constrained optimization over time-varying digraphs
- Fast linear iterations for distributed averaging
- DSA: Decentralized Double Stochastic Averaging Gradient Algorithm
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Variance-Reduced Stochastic Learning by Networked Agents Under Random Reshuffling
- Exact Diffusion for Distributed Optimization and Learning—Part I: Algorithm Development
- Harnessing Smoothness to Accelerate Distributed Optimization
- Katyusha: the first direct acceleration of stochastic gradient methods
- Constrained Consensus and Optimization in Multi-Agent Networks
- Convergence of Distributed Stochastic Variance Reduced Methods Without Sampling Extra Data
- Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
- A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- ADD-OPT: Accelerated Distributed Directed Optimization