A stochastic averaging gradient algorithm with multi‐step communication for distributed optimization
Publication: 6054701
DOI: 10.1002/oca.2973
MaRDI QID: Q6054701
Zheng Wang, Zhenyuan Du, Jinhui Hu, Unnamed Author, Huaqing Li, Yu Yan, Liping Feng
Publication date: 25 October 2023
Published in: Optimal Control Applications and Methods
Keywords: distributed convex optimization; linear convergence rate; stochastic averaging gradient; multi-step communication
Cites Work
- Unnamed Item
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Minimizing finite sums with the stochastic average gradient
- Distributed stochastic subgradient projection algorithms for convex optimization
- Optimal distributed stochastic mirror descent for strongly convex optimization
- DSA: Decentralized Double Stochastic Averaging Gradient Algorithm
- Excess-Risk of Distributed Stochastic Learners
- Stochastic Gradient-Push for Strongly Convex Functions on Time-Varying Directed Graphs
- Noise Reduction by Swarming in Social Foraging
- Explicit Convergence Rate of a Distributed Alternating Direction Method of Multipliers
- Fast Distributed Gradient Methods
- Approximate Projection Methods for Decentralized Optimization With Functional Constraints
- Linear Convergence in Optimization Over Directed Graphs With Row-Stochastic Matrices
- On Projected Stochastic Gradient Descent Algorithm with Weighted Averaging for Least Squares Regression
- Fast Convergence Rates for Distributed Non-Bayesian Learning
- Diffusion Adaptation Strategies for Distributed Optimization and Learning Over Networks
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- Hybrid Random/Deterministic Parallel Algorithms for Convex and Nonconvex Big Data Optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Harnessing Smoothness to Accelerate Distributed Optimization
- Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions
- Distributed Subgradient Methods for Multi-Agent Optimization
- Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
- FlexPD: A Flexible Framework of First-Order Primal-Dual Algorithms for Distributed Optimization
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- ADD-OPT: Accelerated Distributed Directed Optimization