Recent theoretical advances in decentralized distributed convex optimization
From MaRDI portal
Publication: 6354638
DOI: 10.1007/978-3-031-00832-0_8 · zbMath: 1527.90159 · arXiv: 2011.13259 · MaRDI QID: Q6354638
Darina Dvinskikh, Aleksandr Beznosikov, Alexander Rogozin, Eduard Gorbunov, A. V. Gasnikov
Publication date: 26 November 2020
Related Items (2)
- Decentralized saddle-point problems with different constants of strong convexity and strong concavity
- Decentralized convex optimization on time-varying networks with application to Wasserstein barycenters
Cites Work
- Gradient sliding for composite optimization
- First-order methods of smooth convex optimization with inexact oracle
- An optimal method for stochastic composite optimization
- Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex
- Minimizing finite sums with the stochastic average gradient
- Lectures on convex optimization
- Stochastic intermediate gradient method for convex problems with stochastic inexact oracle
- Parametric estimation. Finite sample theory
- Monotone (nonlinear) operators in Hilbert space
- Decomposition into functions in the minimization problem
- First- and second-order diffusive methods for rapid, coarse, distributed load balancing
- Introductory lectures on convex optimization. A basic course.
- Entropic optimal transport is maximum-likelihood deconvolution
- Dual approaches to the minimization of strongly convex functionals with a simple structure under affine constraints
- Universal method for stochastic composite optimization problems
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Gradient-free method for nonsmooth distributed optimization
- Distributed stochastic gradient tracking methods
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Algorithms for stochastic optimization with function or expectation constraints
- Gradient methods for problems with inexact model of the objective
- Accelerated and unaccelerated stochastic gradient descent in model generality
- Alternating minimization methods for strongly convex optimization
- On the upper bound for the expectation of the norm of a vector uniformly distributed on the sphere and the phenomenon of concentration of uniform measure on the sphere
- Communication-efficient algorithms for decentralized and stochastic optimization
- First-order and stochastic optimization methods for machine learning
- Random gradient-free minimization of convex functions
- Mirror descent and convex optimization problems with non-smooth inequality constraints
- Fast linear iterations for distributed averaging
- Accelerated meta-algorithm for convex optimization problems
- Penalty-based method for decentralized optimization over time-varying graphs
- On the Convergence of Decentralized Gradient Descent
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- Fast Distributed Gradient Methods
- Revisiting EXTRA for Smooth Distributed Optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Harnessing Smoothness to Accelerate Distributed Optimization
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Random Gradient Extrapolation for Distributed and Stochastic Optimization
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem
- Distributed Subgradient Methods for Multi-Agent Optimization
- Katyusha: the first direct acceleration of stochastic gradient methods
- Decentralized Proximal Gradient Algorithms With Linear Convergence Rates
- Ergodicity of Continuous-Time Distributed Averaging Dynamics: A Spanning Directed Rooted Tree Approach
- Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation
- An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
- Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
- Accelerated Distributed Nesterov Gradient Descent
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Solving variational inequalities with Stochastic Mirror-Prox algorithm
- Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
- A Smoothed Dual Approach for Variational Wasserstein Problems
- Derivative-free optimization methods
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Understanding Machine Learning
- A Stochastic Approximation Method
- A dual approach for optimal algorithms in distributed optimization over networks
- Inexact model: a framework for optimization and variational inequalities
- Distributed Zero-Order Algorithms for Nonconvex Multiagent Optimization
- Towards accelerated rates for distributed optimization over time-varying networks
- Near-Optimal Decentralized Algorithms for Saddle Point Problems over Time-Varying Networks