A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates
From MaRDI portal
Publication: 5238993
DOI: 10.1109/TSP.2019.2926022
Wikidata: Q127539842
Scholia: Q127539842
MaRDI QID: Q5238993
Publication date: 28 October 2019
Published in: IEEE Transactions on Signal Processing
Full work available at URL: https://arxiv.org/abs/1704.07807
Related Items
Distributed smooth optimisation with event-triggered proportional-integral algorithms
Distribution agnostic Bayesian compressive sensing with incremental support estimation
Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation
Decentralized proximal splitting algorithms for composite constrained convex optimization
Decentralized ADMM with compressed and event-triggered communication
An asynchronous subgradient-proximal method for solving additive convex optimization problems
A distributed proximal gradient method with time-varying delays for solving additive convex optimizations
A divide-and-conquer algorithm for distributed optimization on networks
Understanding a Class of Decentralized and Federated Optimization Algorithms: A Multirate Feedback Control Perspective
A Unified Framework for Continuous-Time Unconstrained Distributed Optimization
Golden ratio proximal gradient ADMM for distributed composite convex optimization
Proximal nested primal-dual gradient algorithms for distributed constraint-coupled composite optimization
An Optimal Algorithm for Decentralized Finite-Sum Optimization
New convergence analysis of a primal-dual algorithm with large stepsizes
On the linear convergence of two decentralized algorithms
Distributed decision-coupled constrained optimization via proximal-tracking
Distributed composite optimization for multi-agent systems with asynchrony
Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction