Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation
DOI: 10.1137/19M1259973
MaRDI QID: Q5071108
Gesualdo Scutari, Amir Daneshmand, Ying Sun
Publication date: 20 April 2022
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1905.02637
Keywords: machine learning, distributed optimization, linear rate, surrogate functions, gradient tracking, statistical similarity
Mathematics Subject Classification:
- Analysis of algorithms and problem complexity (68Q25)
- Graph theory (including graph drawing) in computer science (68R10)
- Computer graphics; computational geometry (digital and algorithmic aspects) (68U05)
Related Items
- On the Divergence of Decentralized Nonconvex Optimization
- Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization
- Optimal data splitting in distributed optimization for machine learning
- EFIX: exact fixed point methods for distributed optimization
- Recent theoretical advances in decentralized distributed convex optimization
Cites Work
- Parallel and distributed successive convex approximation methods for big-data optimization
- Communication-efficient distributed optimization of self-concordant empirical loss
- Distributed nonconvex constrained optimization over time-varying digraphs
- On the Convergence of Decentralized Gradient Descent
- Linear Convergence Rate of a Class of Distributed Augmented Lagrangian Algorithms
- Distributed Optimization Over Time-Varying Directed Graphs
- Linear Convergence in Optimization Over Directed Graphs With Row-Stochastic Matrices
- Convergence of Asynchronous Distributed Gradient Methods Over Stochastic Networks
- Diffusion Least-Mean Squares Over Adaptive Networks: Formulation and Performance Analysis
- Cooperative Convex Optimization in Networked Systems: Augmented Lagrangian Algorithms With Directed Gossip Communication
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- Parallel Selective Algorithms for Nonconvex Big Data Optimization
- DLM: Decentralized Linearized Alternating Direction Method of Multipliers
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Relatively Smooth Convex Optimization by First-Order Methods, and Applications
- Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory
- DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
- Exact Diffusion for Distributed Optimization and Learning—Part II: Convergence Analysis
- Harnessing Smoothness to Accelerate Distributed Optimization
- Extrapush for Convex Smooth Decentralized Optimization Over Directed Networks
- Distributed Subgradient Methods for Multi-Agent Optimization
- Constrained Consensus and Optimization in Multi-Agent Networks
- Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis
- Distributed Heavy-Ball: A Generalization and Acceleration of First-Order Methods With Gradient Tracking
- Balancing Communication and Computation in Distributed Optimization
- A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- ADD-OPT: Accelerated Distributed Directed Optimization
- Decentralized Optimization Over Time-Varying Directed Graphs With Row and Column-Stochastic Matrices
- Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization