Optimized convergence of stochastic gradient descent by weighted averaging
Publication: 6641001
DOI: 10.1080/10556788.2024.2306383
Wikidata: Q128684763
Scholia: Q128684763
MaRDI QID: Q6641001
Florian Jarre, Melinda Hagedorn
Publication date: 20 November 2024
Published in: Optimization Methods & Software
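The title, together with the cited work "Acceleration of Stochastic Approximation by Averaging" (Polyak–Ruppert averaging), concerns averaging the iterates of stochastic gradient descent with nonuniform weights. The sketch below is only an illustration of that general idea on a synthetic least-squares problem, not the weighting scheme of this publication; the function name `sgd_weighted_average`, the linearly growing weights, and the problem setup are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize 0.5 * ||A x - b||^2.
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)

def sgd_weighted_average(steps=5000, step_size=0.01):
    """SGD on single rows of A, returning the last iterate and a
    weighted average of iterates.  Iterate k gets weight k, so later
    (more accurate) iterates count more -- one common weighting choice,
    assumed here for illustration."""
    x = np.zeros(d)
    x_avg = np.zeros(d)
    weight_sum = 0.0
    for k in range(1, steps + 1):
        i = rng.integers(n)                      # sample one data point
        grad = (A[i] @ x - b[i]) * A[i]          # stochastic gradient of 0.5*(A_i x - b_i)^2
        x = x - step_size * grad
        weight_sum += k
        x_avg += (k / weight_sum) * (x - x_avg)  # running weighted mean of iterates
    return x, x_avg

x_last, x_avg = sgd_weighted_average()
err_last = np.linalg.norm(x_last - x_true)
err_avg = np.linalg.norm(x_avg - x_true)
```

The running-mean update uses the standard recurrence for a weighted average, so no history of iterates needs to be stored; averaging typically damps the noise floor that the last SGD iterate oscillates in.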
Cites Work
- Title not available
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- On efficiently combining limited-memory and trust-region techniques
- Acceleration of Stochastic Approximation by Averaging
- On Projected Stochastic Gradient Descent Algorithm with Weighted Averaging for Least Squares Regression
- Some methods of speeding up the convergence of iteration methods
- Stochastic gradient boosting