A distributed proximal gradient method with time-varying delays for solving additive convex optimizations
DOI: 10.1016/j.rinam.2023.100370 · zbMath: 1529.90058 · OpenAlex: W4366189978 · MaRDI QID: Q6110428
Nimit Nimana, Sakrapee Namsak, Narin Petrot
Publication date: 6 July 2023
Published in: Results in Applied Mathematics
Full work available at URL: https://doi.org/10.1016/j.rinam.2023.100370
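For orientation, below is a minimal sketch of a plain proximal gradient iteration for an additive convex objective, minimize over x of f_1(x) + ... + f_m(x) + g(x), the problem class named in the title. This is a generic centralized, undelayed forward-backward scheme, not the paper's distributed algorithm with time-varying delays; the helper names (`soft_threshold`, `proximal_gradient`), the fixed step size, and the toy least-squares data are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(grads, prox, x0, step=0.01, iters=500):
    """Forward-backward iteration for  min_x  sum_i f_i(x) + g(x).

    `grads` holds the gradients of the smooth terms f_i;
    `prox(v, t)` is the proximal operator of t * g at v.
    (Centralized sketch: all gradients are aggregated each step,
    with no network delays.)
    """
    x = x0.copy()
    for _ in range(iters):
        grad_sum = sum(grad(x) for grad in grads)  # aggregate smooth gradients
        x = prox(x - step * grad_sum, step)        # forward step, then prox
    return x

# Toy instance: two quadratic "agents" plus an l1 regularizer.
rng = np.random.default_rng(0)
A1, A2 = rng.standard_normal((10, 5)), rng.standard_normal((10, 5))
b1, b2 = rng.standard_normal(10), rng.standard_normal(10)
lam = 0.1

grads = [
    lambda x: A1.T @ (A1 @ x - b1),
    lambda x: A2.T @ (A2 @ x - b2),
]
prox = lambda v, t: soft_threshold(v, lam * t)

x_star = proximal_gradient(grads, prox, x0=np.zeros(5))
print(x_star)
```

The fixed step size stands in for the diminishing or delay-dependent step-size rules typical of the delayed distributed setting; for the smooth part above, any step below 2/L (L the Lipschitz constant of the aggregated gradient) suffices for convergence.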
Cites Work
- Incrementally updated gradient methods for constrained and regularized optimization
- First-order and stochastic optimization methods for machine learning
- Incremental Subgradient Methods for Nondifferentiable Optimization
- A globally convergent modified multivariate version of the method of moving asymptotes
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
- Decentralized Sparse Signal Recovery for Compressive Sleeping Wireless Sensor Networks
- Distributed Sparse Linear Regression
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- First-Order Methods in Optimization
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
- A globally convergent modified version of the method of moving asymptotes
- A Distributed Flexible Delay-Tolerant Proximal Gradient Algorithm
- A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient
- A Convergent Incremental Gradient Method with a Constant Step Size