A randomized incremental primal-dual method for decentralized consensus optimization
Publication: 4995044
DOI: 10.1142/S0219530519410082
zbMath: 1470.90049
OpenAlex: W2983794651
MaRDI QID: Q4995044
Chenxi Chen, Yunmei Chen, Xiaojing Ye
Publication date: 23 June 2021
Published in: Analysis and Applications
Full work available at URL: https://doi.org/10.1142/s0219530519410082
Classification: Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Nonlinear programming (90C30)
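For context (not described on this page itself, but standard across the cited works): decentralized consensus optimization refers to a network of $n$ agents jointly solving

$$\min_{x \in \mathbb{R}^d} \ \sum_{i=1}^{n} f_i(x),$$

where each local objective $f_i$ is known only to agent $i$ and agents communicate only with their neighbors in a graph. Primal-dual methods typically rewrite the implicit consensus requirement $x_1 = \cdots = x_n$ as an explicit linear constraint handled through dual variables.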
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Minimizing finite sums with the stochastic average gradient
- Communication-efficient algorithms for decentralized and stochastic optimization
- A Class of Randomized Primal-Dual Algorithms for Distributed Optimization
- DSA: Decentralized Double Stochastic Averaging Gradient Algorithm
- Convergence Analysis of Alternating Direction Method of Multipliers for a Family of Nonconvex Problems
- A Coordinate Descent Primal-Dual Algorithm and Application to Distributed Asynchronous Optimization
- Linear Convergence Rate of a Class of Distributed Augmented Lagrangian Algorithms
- Distributed Optimization Over Time-Varying Directed Graphs
- Distributed Optimization-Based Control of Multi-Agent Networks in Complex Environments
- Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization
- Consensus in Ad Hoc WSNs With Noisy Links—Part I: Distributed Estimation of Deterministic Signals
- Fast Consensus by the Alternating Direction Multipliers Method
- D-ADMM: A Communication-Efficient Distributed Algorithm for Separable Optimization
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- Multi-Agent Distributed Optimization via Inexact Consensus ADMM
- DLM: Decentralized Linearized Alternating Direction Method of Multipliers
- Stochastic Proximal Gradient Consensus Over Random Networks
- Decentralized Consensus Algorithm with Delayed and Stochastic Gradients
- Random Gradient Extrapolation for Distributed and Stochastic Optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Asynchronous Broadcast-Based Convex Optimization Over a Network
- A Convergent Incremental Gradient Method with a Constant Step Size
- Stochastic Quasi-Fejér Block-Coordinate Fixed Point Iterations with Random Sweeping