Momentum via Primal Averaging: Theoretical Insights and Learning Rate Schedules for Non-Convex Optimization
Publication:6350313
arXiv: 2010.00406
MaRDI QID: Q6350313
Author name not available
Publication date: 1 October 2020
Abstract: Momentum methods are now used pervasively within the machine learning community for training non-convex models such as deep neural networks. Empirically, they outperform traditional stochastic gradient descent (SGD) approaches. In this work we develop a Lyapunov analysis of SGD with momentum (SGD+M) by utilizing an equivalent rewriting of the method known as the stochastic primal averaging (SPA) form. This analysis is much tighter than previous theory in the non-convex case, and it allows us to give precise insights into when SGD+M may outperform SGD, and which hyper-parameter schedules will work and why.
Has companion code repository: https://github.com/facebookresearch/madgrad
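The equivalence referred to in the abstract, that SGD with momentum can be rewritten in a primal-averaging form, can be checked numerically. Below is a minimal standalone sketch (not code from the linked madgrad repository): it runs heavy-ball SGD+M and an averaged iteration z_{k+1} = z_k - eta*g_k, x_{k+1} = (1-c)*x_k + c*z_{k+1} side by side on an illustrative quadratic, using the parameter mapping c = 1 - beta, eta = alpha/(1 - beta). The objective, step counts, and variable names are assumptions made for illustration only.

    import numpy as np

    def grad(x):
        # Gradient of the illustrative quadratic f(x) = 0.5 * ||x||^2.
        return x

    alpha, beta = 0.1, 0.9   # SGD+M step size and momentum (illustrative values)
    steps = 50
    x0 = np.array([1.0, -2.0])

    # SGD with momentum (heavy-ball form): m_k = beta*m_{k-1} + g_k, x_{k+1} = x_k - alpha*m_k
    x_m, m = x0.copy(), np.zeros_like(x0)
    trace_m = []
    for _ in range(steps):
        m = beta * m + grad(x_m)
        x_m = x_m - alpha * m
        trace_m.append(x_m.copy())

    # Primal-averaging form with the mapping c = 1 - beta, eta = alpha / (1 - beta)
    c, eta = 1.0 - beta, alpha / (1.0 - beta)
    x_a, z = x0.copy(), x0.copy()
    trace_a = []
    for _ in range(steps):
        z = z - eta * grad(x_a)        # gradient step on the auxiliary sequence z
        x_a = (1.0 - c) * x_a + c * z  # averaging step yields the next primal iterate
        trace_a.append(x_a.copy())

    # The two iterate sequences coincide up to floating-point error.
    print("max deviation:", np.max(np.abs(np.array(trace_m) - np.array(trace_a))))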