Convergence results of a nested decentralized gradient method for non-strongly convex problems
From MaRDI portal
Publication: 2082236
DOI: 10.1007/s10957-022-02069-0
zbMath: 1502.90123
arXiv: 2108.02129
OpenAlex: W3188073013
MaRDI QID: Q2082236
Seok-Bae Yun, Woocheol Choi, Doheon Kim
Publication date: 4 October 2022
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://arxiv.org/abs/2108.02129
Cites Work
- Distributed stochastic subgradient projection algorithms for convex optimization
- Non-negative matrices and Markov chains. 2nd ed.
- Distributed stochastic gradient tracking methods
- Linear convergence of first order methods for non-strongly convex optimization
- On the Convergence of Decentralized Gradient Descent
- Distributed Optimization Over Time-Varying Directed Graphs
- Decentralized Sparse Signal Recovery for Compressive Sleeping Wireless Sensor Networks
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Cloud K-SVD: A Collaborative Dictionary Learning Algorithm for Big, Distributed Data
- Harnessing Smoothness to Accelerate Distributed Optimization
- Optimization Methods for Large-Scale Machine Learning
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Distributed Subgradient Methods for Multi-Agent Optimization
- Constrained Consensus and Optimization in Multi-Agent Networks
- Balancing Communication and Computation in Distributed Optimization
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Distributed Subgradient Methods for Convex Optimization Over Random Networks
- On the Convergence of Nested Decentralized Gradient Methods With Multiple Consensus and Gradient Steps