Stochastic distributed learning with gradient quantization and double-variance reduction
Publication: 5882226
DOI: 10.1080/10556788.2022.2117355 · OpenAlex: W4297383547 · MaRDI QID: Q5882226
Samuel Horváth, Konstantin Mishchenko, Dmitry P. Kovalev, Sebastian U. Stich, Peter Richtárik
Publication date: 15 March 2023
Published in: Optimization Methods and Software
Full work available at URL: https://doi.org/10.1080/10556788.2022.2117355
Keywords: stochastic optimization, gradient methods, variance reduction, distributed optimization, federated learning, communication compression
Related Items (4)
- Privacy-preserving federated learning on lattice quantization
- Unified analysis of stochastic gradient methods for composite convex and smooth optimization
- Nonlinear Gradient Mappings and Stochastic Optimization: A General Framework with Applications to Heavy-Tail Noise
- Compression and data similarity: combination of two techniques for communication-efficient solving of distributed variational inequalities
Uses Software
Cites Work
- Distributed Coordinate Descent Method for Learning with Big Data
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Robust Stochastic Approximation Approach to Stochastic Programming
- Distributed optimization with arbitrary local solvers
- Optimal Distributed Online Prediction using Mini-Batches
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- A Stochastic Approximation Method