Accelerated and unaccelerated stochastic gradient descent in model generality
Publication: 2210408
DOI: 10.1134/S0001434620090230 · zbMath: 1452.90226 · OpenAlex: W3094976944 · MaRDI QID: Q2210408
D. M. Dvinskikh, Alexander V. Gasnikov, S. S. Omel'chenko, A. I. Tyurin
Publication date: 6 November 2020
Published in: Mathematical Notes
Full work available at URL: https://doi.org/10.1134/s0001434620090230
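For context, a minimal sketch contrasting unaccelerated stochastic gradient descent with a Nesterov-accelerated variant on a smooth convex least-squares problem. This is an illustration of the two method families the title refers to, not the paper's algorithm: the problem instance, step size 1/L, and FISTA-style momentum schedule below are standard textbook choices assumed here.

```python
# Illustrative sketch (not the paper's method): plain SGD vs. a
# Nesterov-accelerated SGD variant on f(x) = (1/2n) * ||Ax - b||^2,
# with unbiased stochastic gradients from uniformly sampled rows.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Smoothness constant of f: largest eigenvalue of the Hessian A^T A / n.
L = np.linalg.eigvalsh(A.T @ A / n).max()

def stoch_grad(x, batch=10):
    """Unbiased stochastic gradient from a random minibatch of rows."""
    idx = rng.integers(0, n, size=batch)
    return A[idx].T @ (A[idx] @ x - b[idx]) / batch

def sgd(x0, steps=2000, eta=None):
    """Unaccelerated SGD: x_{k+1} = x_k - eta * g_k."""
    eta = eta or 1.0 / L
    x = x0.copy()
    for _ in range(steps):
        x -= eta * stoch_grad(x)
    return x

def accelerated_sgd(x0, steps=2000, eta=None):
    """Nesterov-style accelerated SGD with the classical t_k momentum.

    With exact gradients this attains the accelerated O(1/k^2) rate;
    with stochastic gradients its behavior depends on the noise level,
    which is the kind of trade-off the paper analyzes.
    """
    eta = eta or 1.0 / L
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(steps):
        x_next = y - eta * stoch_grad(y)
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + (t - 1) / t_next * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

x0 = np.zeros(d)
f = lambda x: 0.5 * np.mean((A @ x - b) ** 2)
print("unaccelerated SGD: f =", f(sgd(x0)))
print("accelerated SGD:   f =", f(accelerated_sgd(x0)))
```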
Related Items
- Vaidya's method for convex stochastic optimization problems in small dimension
- Accelerated gradient methods with absolute and relative noise in the gradient
- Three search algorithms for three nonconvex optimization problems
- Unifying framework for accelerated randomized methods in convex optimization
- Towards accelerated rates for distributed optimization over time-varying networks
- Recent theoretical advances in decentralized distributed convex optimization
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
Cites Work
- Gradient methods for minimizing composite functions
- Lectures on convex optimization
- New version of mirror prox for variational inequalities with adaptation to inexactness
- Gradient methods for problems with inexact model of the objective
- Fast gradient descent for convex minimization problems with an oracle producing a \((\delta, L)\)-model of function at the requested point