Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
DOI: 10.1515/jiip-2020-0068 · zbMath: 1472.90087 · arXiv: 1904.09015 · OpenAlex: W3124203093 · MaRDI QID: Q2042418
Alexander V. Gasnikov, Darina Dvinskikh
Publication date: 20 July 2021
Published in: Journal of Inverse and Ill-Posed Problems
Full work available at URL: https://arxiv.org/abs/1904.09015
stochastic optimization, convex optimization, first-order method, complexity bounds, decentralized optimization, mini-batch, sum-type inverse problems
Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Stochastic programming (90C15)
Cites Work
- Primal-dual subgradient methods for convex problems
- Smooth minimization of non-smooth functions
- Optimized first-order methods for smooth convex minimization
- Gradient sliding for composite optimization
- Gradient methods for minimizing composite functions
- First-order methods of smooth convex optimization with inexact oracle
- Universal gradient methods for convex optimization problems
- Lectures on convex optimization
- Stochastic intermediate gradient method for convex problems with stochastic inexact oracle
- Parametric estimation. Finite sample theory
- Decomposition into functions in the minimization problem
- Dual approaches to the minimization of strongly convex functionals with a simple structure under affine constraints
- Universal method for stochastic composite optimization problems
- An optimal randomized incremental gradient method
- An accelerated directional derivative method for smooth stochastic convex optimization
- Gradient methods for problems with inexact model of the objective
- Accelerated and unaccelerated stochastic gradient descent in model generality
- Implementable tensor methods in unconstrained convex optimization
- Communication-efficient algorithms for decentralized and stochastic optimization
- Distributed optimization over networks
- Penalty-based method for decentralized optimization over time-varying graphs
- Lectures on Modern Convex Optimization
- Efficiency of the Accelerated Coordinate Descent Method on Structured Optimization Problems
- Fast Primal-Dual Gradient Method for Strongly Convex Minimization Problems with Linear Constraints
- Accuracy Certificates for Computational Problems with Convex Structure
- Revisiting EXTRA for Smooth Distributed Optimization
- Smooth Optimization with Approximate Gradient
- Lectures on Stochastic Programming
- Non-asymptotic confidence bounds for the optimal value of a stochastic program
- Parallel Algorithms and Probability of Large Deviation for Stochastic Convex Optimization Problems
- Random Gradient Extrapolation for Distributed and Stochastic Optimization
- Computational Methods for Inverse Problems
- Optimal Distributed Convex Optimization on Slowly Time-Varying Graphs
- Katyusha: the first direct acceleration of stochastic gradient methods
- On the rates of convergence of parallelized averaged stochastic gradient algorithms
- Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming