scientific article; zbMATH DE number 7626728
From MaRDI portal
Publication:5053207
Publication date: 6 December 2022
Full work available at URL: https://jmlr.csail.mit.edu/papers/v22/20-147.html
Title: unavailable (zbMATH Open Web Interface contents blocked due to conflicting licenses)
Keywords: convergence analysis; distributed optimization; federated learning; communication-efficient training; distributed SGD with local updates
Related Items
- Privacy-preserving federated learning on lattice quantization
- Communication-efficient and privacy-preserving large-scale federated learning counteracting heterogeneity
Uses Software
Cites Work
- Unnamed Item
- Unnamed Item
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Eigenvalues of nonnegative symmetric matrices
- Oracle complexity of second-order methods for smooth convex optimization
- On the Convergence of Decentralized Gradient Descent
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Optimization Methods for Large-Scale Machine Learning
- Distributed Subgradient Methods for Multi-Agent Optimization
- Variance-Reduced Decentralized Stochastic Optimization With Accelerated Convergence
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Optimal Distributed Online Prediction using Mini-Batches
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Push–Pull Gradient Methods for Distributed Optimization in Networks
- Lower bounds for non-convex stochastic optimization