A linearly convergent stochastic recursive gradient method for convex optimization
Publication: 2228399
DOI: 10.1007/s11590-020-01550-x
zbMath: 1459.90137
OpenAlex: W3008961984
MaRDI QID: Q2228399
Publication date: 17 February 2021
Published in: Optimization Letters
Full work available at URL: https://doi.org/10.1007/s11590-020-01550-x
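This entry does not summarize the method itself. For orientation only, here is a minimal sketch of a generic SARAH-style stochastic recursive gradient loop for a finite-sum convex problem; the function names (sarah, grad_full, grad_i) and all parameter values are illustrative assumptions inferred from the title, not the paper's exact algorithm.

```python
# Minimal sketch of a stochastic recursive gradient (SARAH-style) method for
# minimizing f(w) = (1/n) * sum_i f_i(w). Illustrative only; not the paper's
# exact algorithm.
import numpy as np

def sarah(grad_full, grad_i, w0, n, eta=0.01, epochs=10, inner=100, seed=0):
    """grad_full(w): full gradient; grad_i(w, i): gradient of the i-th term."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(epochs):
        v = grad_full(w)          # full gradient at the start of each epoch
        w_prev = w.copy()
        w = w - eta * v
        for _ in range(inner):
            i = rng.integers(n)   # sample one component uniformly at random
            # Recursive estimator: correct v by the sampled component's
            # gradient difference between consecutive iterates.
            v = grad_i(w, i) - grad_i(w_prev, i) + v
            w_prev = w.copy()
            w = w - eta * v
    return w

# Usage example (assumed problem): least squares, f_i(w) = 0.5*(a_i^T w - b_i)^2
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d = 200, 5
    A, b = rng.normal(size=(n, d)), rng.normal(size=n)
    gf = lambda w: A.T @ (A @ w - b) / n
    gi = lambda w, i: A[i] * (A[i] @ w - b[i])
    w = sarah(gf, gi, np.zeros(d), n)
    print("gradient norm:", np.linalg.norm(gf(w)))
```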
Related Items (3)
- An online conjugate gradient algorithm for large-scale data analysis in machine learning
- A mini-batch proximal stochastic recursive gradient algorithm with diagonal Barzilai-Borwein stepsize
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
Cites Work
- Minimizing finite sums with the stochastic average gradient
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- R-linear convergence of the Barzilai and Borwein gradient method
- Two-Point Step Size Gradient Methods
- Degenerate Nonlinear Programming with a Quadratic Growth Condition
- Non-asymptotic convergence analysis of inexact gradient methods for machine learning without strong convexity
- Optimization Methods for Large-Scale Machine Learning
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- A Stochastic Approximation Method
- SpiderBoost