Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
From MaRDI portal
Publication:5214253
zbMath: 1446.90127 · arXiv: 1806.00291 · MaRDI QID: Q5214253
Yin Tat Lee, Francis Bach, Sébastien Bubeck, Laurent Massoulié, Kevin Scaman
Publication date: 7 February 2020
Full work available at URL: https://arxiv.org/abs/1806.00291
Related Items
- Composite optimization for the resource allocation problem
- An iteratively regularized stochastic gradient method for estimating a random parameter in a stochastic PDE. A variational inequality approach
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- Optimal Methods for Convex Risk-Averse Distributed Optimization
- Decentralized saddle-point problems with different constants of strong convexity and strong concavity
- Decentralized personalized federated learning: lower bounds and optimal algorithm for all personalization modes
- Recent theoretical advances in decentralized distributed convex optimization
- Communication-Efficient Distributed Eigenspace Estimation
- A regularized stochastic subgradient projection method for an optimal control problem in a stochastic partial differential equation
- A dual approach for optimal algorithms in distributed optimization over networks
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Chebyshev acceleration of iterative refinement
- \(\lambda_1\), isoperimetric inequalities for graphs, and superconcentrators
- Introductory lectures on convex optimization. A basic course.
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Communication-efficient algorithms for decentralized and stochastic optimization
- Mirror Prox algorithm for multi-term composite minimization and semi-separable problems
- DSA: Decentralized Double Stochastic Averaging Gradient Algorithm
- Randomized Smoothing for Stochastic Optimization
- Stochastic Gradient-Push for Strongly Convex Functions on Time-Varying Directed Graphs
- Linear Convergence Rate of a Class of Distributed Augmented Lagrangian Algorithms
- Distributed Optimization Over Time-Varying Directed Graphs
- Fast Distributed Gradient Methods
- DEXTRA: A Fast Algorithm for Optimization Over Directed Graphs
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
- Distributed Subgradient Methods for Multi-Agent Optimization
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Proximité et dualité dans un espace hilbertien
- Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions
- Push–Pull Gradient Methods for Distributed Optimization in Networks
- A dual approach for optimal algorithms in distributed optimization over networks