Stochastic gradient descent with Polyak's learning rate
Publication: 1983178
DOI: 10.1007/s10915-021-01628-3 · zbMath: 1477.90105 · arXiv: 1903.08688 · OpenAlex: W3196800830 · MaRDI QID: Q1983178
Mariana Prazeres, Adam M. Oberman
Publication date: 15 September 2021
Published in: Journal of Scientific Computing
Full work available at URL: https://arxiv.org/abs/1903.08688
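The record itself contains only metadata, but as a rough illustration of the topic named in the title: the classical Polyak step size sets the learning rate from the current objective gap and gradient norm, η_k = (f(x_k) − f*)/‖∇f(x_k)‖². Below is a minimal sketch of a stochastic variant on a toy least-squares problem; the problem, the assumption that each per-sample optimum is zero, and all names are illustrative assumptions, not details taken from the cited publication.

```python
# Minimal sketch of SGD with a Polyak-type step size on a toy least-squares
# problem. The setup and the assumption f_i* = 0 for every sample are
# illustrative only; they are not taken from the cited publication.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true                      # consistent system, so each per-sample optimum f_i* = 0

def f_i(x, i):                      # per-sample loss f_i(x) = 0.5 * (a_i^T x - b_i)^2
    r = A[i] @ x - b[i]
    return 0.5 * r * r

def grad_i(x, i):                   # its gradient: (a_i^T x - b_i) * a_i
    return (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)
for k in range(2000):
    i = rng.integers(n)             # sample one data point
    g = grad_i(x, i)
    gap = f_i(x, i) - 0.0           # assumed per-sample optimal value f_i* = 0
    eta = gap / (np.dot(g, g) + 1e-12)   # Polyak-type step size (f_i - f_i*) / ||grad f_i||^2
    x -= eta * g

print("distance to x_true:", np.linalg.norm(x - x_true))
```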
Related Items (3)
- Adaptive moment estimation for universal portfolio selection strategy
- Unnamed Item
- An adaptive Polyak heavy-ball method
Uses Software
Cites Work
- Unnamed Item
- Variable target value subgradient method
- Introductory lectures on convex optimization. A basic course.
- First-Order Methods in Optimization
- Optimization Methods for Large-Scale Machine Learning
- Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
- Information-Based Complexity, Feedback and Dynamics in Convex Programming
- Some methods of speeding up the convergence of iteration methods
- A Stochastic Approximation Method
- Introduction to Continuous Optimization
- Laplacian Smoothing Stochastic Gradient Markov Chain Monte Carlo