Ritz-like values in steplength selections for stochastic gradient methods
DOI: 10.1007/s00500-020-05219-6
zbMath: 1491.65042
OpenAlex: W3075093238
MaRDI QID: Q2156893
Giorgia Franchini, Luca Zanni, Valeria Ruggiero
Publication date: 21 July 2022
Published in: Soft Computing
Full work available at URL: https://doi.org/10.1007/s00500-020-05219-6
Keywords: stochastic gradient methods; Ritz-like values; adaptive subsampling strategies; learning rate selection rule; variance reduction techniques
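The record above only names the technique, so the following is a minimal sketch of how Ritz values can drive steplength selection, assuming Fletcher-style limited-memory steepest descent machinery with mini-batch gradients standing in for exact ones (hence "Ritz-like"). The function name, its arguments, and the fallback rule are illustrative assumptions, not the algorithm of Franchini, Ruggiero, and Zanni.

```python
import numpy as np

def ritz_steplengths(G, alphas, g_next):
    """Sketch: reciprocals of Ritz-like values from a sweep of m stored
    (mini-batch) gradients, following the limited-memory steepest descent
    recurrence g_{i+1} = g_i - alpha_i * A g_i for a quadratic model.

    G       : (n, m) array whose columns are the last m gradients
    alphas  : length-m array of the steplengths used in that sweep
    g_next  : gradient at the current iterate (closes the recurrence)
    """
    m = G.shape[1]
    # Bidiagonal (m+1) x m matrix J encoding A G = [G, g_next] J
    J = np.zeros((m + 1, m))
    for i in range(m):
        J[i, i] = 1.0 / alphas[i]
        J[i + 1, i] = -1.0 / alphas[i]
    Q, R = np.linalg.qr(G)  # thin QR of the gradient sweep
    # T = Q^T A Q = [R, Q^T g_next] J R^{-1}; its eigenvalues are
    # Ritz-like approximations of the Hessian spectrum (the recurrence
    # holds only inexactly with stochastic gradients)
    T = np.hstack([R, (Q.T @ g_next)[:, None]]) @ J @ np.linalg.inv(R)
    theta = np.linalg.eigvals(T).real
    theta = theta[theta > 0]      # keep positive approximations only
    if theta.size == 0:           # illustrative fallback, not from the paper
        return np.array([alphas[-1]])
    return np.sort(1.0 / theta)   # smallest steplength first (one common ordering)
```

In a stochastic gradient loop, one would refresh the pool of steplengths with such a call after every sweep of m iterations and consume the pool one steplength per step; the subsampling and variance-reduction safeguards named in the keywords are what make this viable with noisy gradients.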
Related Items
- To the special issue dedicated to the 3rd international conference "Numerical computations: theory and algorithms" (NUMTA 2019), June 15-21, 2019, Isola Capo Rizzuto, Italy
- A line search based proximal stochastic gradient algorithm with dynamical variance reduction
- Explainable bilevel optimization: an application to the Helsinki Deblur Challenge
- Linesearch Newton-CG methods for convex optimization with noise
Cites Work
- A limited memory steepest descent method
- Sample size selection in optimization methods for machine learning
- Gradient methods with adaptive step-sizes
- New adaptive stepsize selections in gradient methods
- On the steplength selection in stochastic gradient methods
- On the steplength selection in gradient methods for unconstrained optimization
- Hybrid Deterministic-Stochastic Methods for Data Fitting
- Alternate minimization gradient method
- Adaptive Sampling Strategies for Stochastic Optimization
- Optimization Methods for Large-Scale Machine Learning
- Handling nonpositive curvature in a limited memory steepest descent method
- Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
- An Introduction to Matrix Concentration Inequalities