Understanding the Role of Momentum in Stochastic Gradient Methods
Publication: 6328197
arXiv: 1910.13962
MaRDI QID: Q6328197
Author name not available
Publication date: 30 October 2019
Abstract: The use of momentum in stochastic gradient methods has become a widespread practice in machine learning. Different variants of momentum, including heavy-ball momentum, Nesterov's accelerated gradient (NAG), and quasi-hyperbolic momentum (QHM), have demonstrated success on various tasks. Despite these empirical successes, there is a lack of clear understanding of how the momentum parameters affect convergence and various performance measures of different algorithms. In this paper, we use the general formulation of QHM to give a unified analysis of several popular algorithms, covering their asymptotic convergence conditions, stability regions, and properties of their stationary distributions. In addition, by combining the results on convergence rates and stationary distributions, we obtain sometimes counter-intuitive practical guidelines for setting the learning rate and momentum parameters.
Has companion code repository: https://github.com/Kipok/understanding-momentum
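The abstract uses quasi-hyperbolic momentum (QHM) as the general formulation that subsumes SGD, heavy-ball momentum, and NAG. Below is a minimal sketch of that update rule, assuming the standard QHM form g_t = beta*g_{t-1} + (1-beta)*grad and theta_t = theta_{t-1} - alpha*((1-nu)*grad + nu*g_t); the toy objective, noise level, and parameter values are illustrative assumptions, not taken from this page or the companion repository.

```python
import numpy as np

def qhm_step(theta, g, grad, alpha=0.1, beta=0.9, nu=0.7):
    """One QHM update. nu=0 recovers plain SGD; nu=1 recovers a
    (normalized) heavy-ball momentum step."""
    g = beta * g + (1.0 - beta) * grad              # momentum buffer
    theta = theta - alpha * ((1.0 - nu) * grad + nu * g)
    return theta, g

# Toy usage: minimize f(theta) = 0.5 * ||theta||^2 with noisy gradients.
rng = np.random.default_rng(0)
theta = np.ones(3)
g = np.zeros_like(theta)
for _ in range(200):
    grad = theta + 0.01 * rng.standard_normal(theta.shape)  # stochastic gradient
    theta, g = qhm_step(theta, g, grad)
print(theta)  # should end up close to the origin
```

Setting nu between 0 and 1 interpolates between the plain stochastic gradient and the momentum buffer, which is what lets a single analysis cover the different algorithms discussed in the abstract.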