scientific article; zbMATH DE number 7307490
From MaRDI portal
Publication:5149264
Authors: Sebastian U. Stich, Sai Praneeth Karimireddy
Publication date: 8 February 2021
Full work available at URL: https://arxiv.org/abs/1909.05350
Title: The error-feedback framework: better rates for SGD with delayed gradients and compressed updates
Keywords: optimization; machine learning; stochastic gradient descent; gradient compression; error-feedback; delayed gradients; error-compensation; local SGD
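The error-feedback and gradient-compression keywords refer to the scheme studied in this publication: a worker applies a compressed update and stores the compression residual, which is added back into the next update so that no gradient information is permanently lost. A minimal illustrative sketch, assuming a top-k sparsifier as the compressor (all function names, parameters, and the toy quadratic objective below are hypothetical, not taken from the paper):

```python
def topk(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest (a common compressor)."""
    idx = sorted(range(len(v)), key=lambda i: abs(v[i]), reverse=True)[:k]
    keep = set(idx)
    return [v[i] if i in keep else 0.0 for i in range(len(v))]

def ef_sgd(grad, x0, lr=0.1, k=1, steps=500):
    """Error-feedback SGD: compress (step + residual), apply it, keep what was dropped."""
    x = [float(xi) for xi in x0]
    e = [0.0] * len(x)                                   # residual ("error") memory
    for _ in range(steps):
        g = grad(x)
        p = [lr * gi + ei for gi, ei in zip(g, e)]       # add back previously dropped mass
        c = topk(p, k)                                   # compressed update actually applied
        e = [pi - ci for pi, ci in zip(p, c)]            # store what the compressor dropped
        x = [xi - ci for xi, ci in zip(x, c)]
    return x

# Toy check on the quadratic f(x) = 0.5 * ||x - target||^2, whose gradient is x - target
target = [1.0, -2.0, 3.0]
x = ef_sgd(lambda x: [xi - ti for xi, ti in zip(x, target)], [0.0, 0.0, 0.0])
```

Even with an aggressive compressor (here only one coordinate transmitted per step), the residual memory lets the iterate recover the full gradient signal over time, which is the mechanism behind the improved rates the paper establishes.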
Related Items
- Efficient and Reliable Overlay Networks for Decentralized Federated Learning
- Faster Rates for Compressed Federated Learning with Client-Variance Reduction
Uses Software
Cites Work
- New method of stochastic approximation type
- Introductory lectures on convex optimization. A basic course.
- Linear convergence of first order methods for non-strongly convex optimization
- Cubic regularization of Newton method and its global performance
- An Asynchronous Mini-Batch Algorithm for Regularized Stochastic Optimization
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Robust Stochastic Approximation Approach to Stochastic Programming
- Gradient Descent Learns Linear Dynamical Systems
- Perturbed Iterate Analysis for Asynchronous Stochastic Optimization
- Improved asynchronous parallel optimization analysis for stochastic incremental methods
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
- Optimization Methods for Large-Scale Machine Learning
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal Distributed Online Prediction using Mini-Batches
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- A Stochastic Approximation Method
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm