Random-reshuffled SARAH does not need full gradient computations
DOI: 10.1007/s11590-023-02081-x · arXiv: 2111.13322 · OpenAlex: W3216032758 · MaRDI QID: Q6204201
Aleksandr Beznosikov, Martin Takáč
Publication date: 27 March 2024
Published in: Optimization Letters
Full work available at URL: https://arxiv.org/abs/2111.13322
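For context, a minimal sketch of the standard SARAH gradient estimator that the paper's random-reshuffled variant builds on; the notation below is the conventional one for finite-sum minimization of f(w) = (1/n) Σᵢ fᵢ(w) and is not taken from the paper itself:

```latex
% Conventional SARAH recursion (requires amsmath). The paper's title indicates
% that, under random reshuffling, the full-gradient anchor step for v_0 can be
% dispensed with; the exact variant and conditions are those of the paper.
\begin{align*}
  v_0     &= \nabla f(w_0) = \frac{1}{n}\sum_{i=1}^{n}\nabla f_i(w_0)
           && \text{full gradient at the start of an outer loop} \\
  v_t     &= \nabla f_{i_t}(w_t) - \nabla f_{i_t}(w_{t-1}) + v_{t-1}
           && \text{recursive estimate, } i_t \text{ drawn from a random reshuffling} \\
  w_{t+1} &= w_t - \eta\, v_t
           && \text{gradient step with stepsize } \eta
\end{align*}
```

As the title states, the contribution is that the random-reshuffled variant of this scheme does not need the full gradient computation in the first line.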
Cites Work
- Minimizing finite sums with the stochastic average gradient
- Pegasos: primal estimated sub-gradient solver for SVM
- User-friendly tail bounds for sums of random matrices
- Accelerating mini-batch SARAH by step size rules
- Linear convergence of cyclic SAGA
- Optimization for deep learning: an overview
- Parallel stochastic gradient algorithms for large-scale matrix completion
- Stochastic Learning Under Random Reshuffling With Constant Step-Sizes
- Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
- Optimization Methods for Large-Scale Machine Learning
- Katyusha: the first direct acceleration of stochastic gradient methods
- Variance-Reduced Stochastic Learning Under Random Reshuffling
- New Convergence Aspects of Stochastic Gradient Algorithms
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Understanding Machine Learning
- A Stochastic Approximation Method
- Inexact SARAH algorithm for stochastic optimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization