On the Convergence of AdaBound and its Connection to SGD
From MaRDI portal
Publication:6323575
arXiv: 1908.04457 · MaRDI QID: Q6323575
Author name not available
Publication date: 12 August 2019
Abstract: Adaptive gradient methods such as Adam have gained great popularity due to their success in training complex neural networks and their lower sensitivity to hyperparameter tuning compared to SGD. However, it has recently been shown that Adam can fail to converge and may cause poor generalization -- this led to the design of new, sophisticated adaptive methods which attempt to generalize well while being theoretically reliable. In this technical report we focus on AdaBound, a promising, recently proposed optimizer. We present a stochastic convex problem for which AdaBound can provably take arbitrarily long to converge in terms of a factor which is not accounted for in the convergence rate guarantee of Luo et al. (2019). We present a new regret guarantee under different assumptions on the bound functions, and provide empirical results on CIFAR suggesting that a specific form of momentum SGD can match AdaBound's performance while having fewer hyperparameters and lower computational costs.
Has companion code repository: https://github.com/mindspore-ai/contrib/blob/master/application/On-the-Convergence-of-AdaBound-and-its-Connection-to-SGD/csgd.py
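The optimizer discussed in the abstract, AdaBound (Luo et al., 2019), computes Adam-style moment estimates but clips the per-coordinate step size between two bound functions that both converge to a fixed "final" learning rate, so the method gradually transitions from Adam-like to SGD-like behavior. The following is a minimal sketch of that update rule, not the companion repository's implementation; the bound functions and hyperparameter values (`final_lr`, `gamma`) follow common defaults and are assumptions here, not taken from this record:

```python
import numpy as np

def adabound_step(x, m, v, grad, t, alpha=1e-3, beta1=0.9, beta2=0.999,
                  final_lr=0.1, gamma=1e-3, eps=1e-8):
    """One AdaBound-style step (sketch): Adam moments, with the adaptive
    step size clipped between bounds that converge to final_lr."""
    # Exponential moving averages of the gradient and squared gradient.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bound functions: lower rises toward final_lr, upper falls toward it,
    # so the clipped step size approaches a constant (SGD-like) rate.
    lower = final_lr * (1 - 1 / (gamma * t + 1))
    upper = final_lr * (1 + 1 / (gamma * t))
    step = np.clip(alpha / (np.sqrt(v) + eps), lower, upper) / np.sqrt(t)
    return x - step * m, m, v

# Usage: minimize f(x) = x^2 / 2, whose gradient is x.
x, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    x, m, v = adabound_step(x, m, v, grad=x, t=t)
```

As t grows, `lower` and `upper` pinch together around `final_lr`, which is the sense in which AdaBound is connected to (momentum) SGD, the comparison the report draws.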