Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
From MaRDI portal
Publication:5103036
DOI: 10.1109/TSP.2020.3018317 · MaRDI QID: Q5103036
Cong Fang, Zhouchen Lin, Wotao Yin, Huan Li
Publication date: 23 September 2022
Published in: IEEE Transactions on Signal Processing
Full work available at URL: https://arxiv.org/abs/1810.01053
Related Items (9)
Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method
DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization
Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
Decentralized optimization over slowly time-varying graphs: algorithms and lower bounds
Decentralized personalized federated learning: lower bounds and optimal algorithm for all personalization modes
EFIX: exact fixed point methods for distributed optimization
Towards accelerated rates for distributed optimization over time-varying networks
Recent theoretical advances in decentralized distributed convex optimization