On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation
DOI: 10.1080/00207721.2020.1815098 · zbMath: 1483.93017 · OpenAlex: W3082930503 · MaRDI QID: Q5026632
Zheng Wang, Huaqing Li, Huqiang Cheng
Publication date: 8 February 2022
Published in: International Journal of Systems Science
Full work available at URL: https://doi.org/10.1080/00207721.2020.1815098
Keywords: acceleration; linear convergence rate; small gain theorem; Nesterov method; distributed optimisation; nonidentical step-sizes
Cites Work
- Unnamed Item
- Unnamed Item
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Geometrical convergence rate for distributed optimization with time-varying directed graphs and uncoordinated step-sizes
- Discrete-time dynamic average consensus
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Introductory lectures on convex optimization. A basic course.
- DSA: Decentralized Double Stochastic Averaging Gradient Algorithm
- Fast Distributed Gradient Methods
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Event-Triggered Quantized Communication-Based Distributed Convex Optimization
- Convergence of Asynchronous Distributed Gradient Methods Over Stochastic Networks
- D-ADMM: A Communication-Efficient Distributed Algorithm for Separable Optimization
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Stochastic Proximal Gradient Consensus Over Random Networks
- Exact Diffusion for Distributed Optimization and Learning—Part I: Algorithm Development
- Harnessing Smoothness to Accelerate Distributed Optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- Asynchronous control of discrete-time stochastic bilinear systems with Markovian switchings
- Adaptive iterative learning control for switched nonlinear continuous-time systems
- Consensus control of leader-following nonlinear multi-agent systems with distributed adaptive iterative learning control
- Accelerated Distributed Nesterov Gradient Descent
- Balancing Communication and Computation in Distributed Optimization
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- ADD-OPT: Accelerated Distributed Directed Optimization
- Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization