A Distributed Flexible Delay-Tolerant Proximal Gradient Algorithm
From MaRDI portal
Publication:5220423
DOI: 10.1137/18M1194699 · zbMath: 1441.90120 · arXiv: 1806.09429 · MaRDI QID: Q5220423
Franck Iutzeler, Konstantin Mishchenko, Jérôme Malick
Publication date: 23 March 2020
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1806.09429
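The title refers to a proximal gradient method for composite convex problems. As a point of reference only, a plain synchronous proximal-gradient (ISTA-style) iteration on a lasso objective can be sketched as below; the function names and toy problem are illustrative assumptions, not taken from the paper, whose actual algorithm distributes the smooth term across workers and tolerates arbitrarily delayed updates.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, lam, step, n_iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient.

    A synchronous sketch for illustration; it does not model the
    delay-tolerant, distributed structure studied in the paper.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                          # gradient of smooth part
        x = soft_threshold(x - step * grad, step * lam)   # proximal step on the l1 part
    return x

# With A = I the minimizer is soft_threshold(b, lam) in closed form,
# which makes a convenient sanity check:
x = proximal_gradient_lasso(np.eye(2), np.array([3.0, 0.5]), lam=1.0, step=1.0)
# x is approximately [2.0, 0.0]
```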
Related Items
- Block delayed Majorize-Minimize subspace algorithm for large scale image restoration
- Nonsmoothness in machine learning: specific structure, proximal identification, and applications
- A second-order accelerated neurodynamic approach for distributed convex optimization
- A distributed proximal gradient method with time-varying delays for solving additive convex optimizations
- Distributed Learning with Sparse Communications by Identification
- Solving composite fixed point problems with block updates
- Proximal Gradient Methods with Adaptive Subspace Sampling
Uses Software
Cites Work
- On unbounded delays in asynchronous parallel fixed-point algorithms
- ARock: An Algorithmic Framework for Asynchronous Parallel Coordinate Updates
- Optimization with Sparsity-Inducing Penalties
- An Asynchronous Mini-Batch Algorithm for Regularized Stochastic Optimization
- Distributed optimization with arbitrary local solvers
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
- Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
- Convex analysis and monotone operator theory in Hilbert spaces