scientific article; zbMATH DE number 7626722
Publication:5053196
Dzung T. Phan, Lam M. Nguyen, Marten van Dijk, Phuong Ha Nguyen, Quoc Tran-Dinh
Publication date: 6 December 2022
Full work available at URL: https://arxiv.org/abs/2002.08246
Title: A unified convergence analysis for shuffling-type gradient methods
Keywords: sampling without replacement; stochastic gradient algorithm; strongly convex minimization; non-convex finite-sum minimization; shuffling-type gradient scheme
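The keywords describe shuffling-type gradient methods: each epoch samples the component gradients without replacement via a random permutation, rather than with replacement as in plain SGD. The following is a minimal Python sketch of the generic scheme under that description only; it is not the paper's specific algorithm, and the step size, epoch count, and the toy least-squares data in the usage example are illustrative assumptions.

```python
# Minimal sketch (assumption: generic random-reshuffling SGD, not the
# paper's exact method) of a shuffling-type gradient scheme for the
# finite-sum problem  min_w (1/n) * sum_i f_i(w).
import numpy as np

def shuffling_sgd(grad_i, w0, n, epochs=50, lr=0.01, seed=0):
    """grad_i(w, i) returns the gradient of the i-th component f_i at w."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for _ in range(epochs):
        # Sampling WITHOUT replacement: one fresh random permutation per
        # epoch, so each component gradient is used exactly once per pass.
        for i in rng.permutation(n):
            w = w - lr * grad_i(w, i)
    return w

# Usage on a toy least-squares instance (hypothetical data):
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
    grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]  # grad of 0.5*(a_i^T w - b_i)^2
    w_star = shuffling_sgd(grad_i, np.zeros(5), n=100)
    print("residual:", np.linalg.norm(A @ w_star - b))
```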
Related Items
- Adaptive step size rules for stochastic optimization in large-scale learning
- Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality
- Random-reshuffled SARAH does not need full gradient computations
Cites Work
- Introductory lectures on convex optimization. A basic course.
- A hybrid stochastic optimization framework for composite nonconvex optimization
- Why random reshuffling beats stochastic gradient descent
- Cubic regularization of Newton method and its global performance
- Incremental Subgradient Methods for Nondifferentiable Optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Acceleration of Stochastic Approximation by Averaging
- Optimization Methods for Large-Scale Machine Learning
- Variance-Reduced Stochastic Learning Under Random Reshuffling
- Convergence Rate of Incremental Gradient and Incremental Newton Methods
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Some methods of speeding up the convergence of iteration methods
- A Stochastic Approximation Method