Calibrating the Adaptive Learning Rate to Improve Convergence of ADAM
Publication: 6323068
arXiv: 1908.00700
MaRDI QID: Q6323068
Author name not available
Publication date: 2 August 2019
Abstract: Adaptive gradient methods (AGMs) have become popular for optimizing nonconvex problems in deep learning. We revisit AGMs and identify that the adaptive learning rate (A-LR) used by AGMs varies significantly across the dimensions of the problem over epochs (i.e., anisotropic scale), which may lead to issues in convergence and generalization. All existing modified AGMs actually represent efforts to revise the A-LR. Theoretically, we provide a new way to analyze the convergence of AGMs and prove that the convergence rate of Adam also depends on its hyper-parameter ε, which has been overlooked previously. Based on these two facts, we propose a new AGM by calibrating the A-LR with an activation (softplus) function, resulting in the Sadam and SAMSGrad methods (code is available at https://github.com/neilliang90/Sadam.git). We further prove that these algorithms enjoy better convergence speed under nonconvex, non-strongly convex, and Polyak-Łojasiewicz conditions compared with Adam. Empirical studies support our observation of the anisotropic A-LR and show that the proposed methods outperform existing AGMs and generalize even better than S-Momentum in multiple deep learning tasks.
Has companion code repository: https://github.com/neilliang90/Sadam
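The repository above holds the authors' implementation. As an illustration only, the sketch below shows one plausible reading of the softplus calibration described in the abstract: Adam's per-coordinate denominator sqrt(v_t) + ε is replaced by a softplus of sqrt(v_t), whose temperature parameter lower-bounds the anisotropic A-LR. The function names, the temperature value beta, and the toy quadratic objective are assumptions made for this sketch, not taken from the paper or its code.

import numpy as np

def softplus(x, beta=50.0):
    # Numerically stable softplus: (1/beta) * log(1 + exp(beta * x)).
    # It lower-bounds the denominator by log(2)/beta, which caps the
    # per-coordinate (anisotropic) adaptive learning rate.
    return np.logaddexp(0.0, beta * x) / beta

def sadam_like_step(param, grad, m, v, t,
                    lr=0.01, beta1=0.9, beta2=0.999, beta=50.0):
    # One update in the style described by the abstract (illustrative only):
    # Adam's moment estimates, but the denominator sqrt(v_hat) + eps is
    # replaced by softplus(sqrt(v_hat)).
    m = beta1 * m + (1 - beta1) * grad          # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment EMA
    m_hat = m / (1 - beta1 ** t)                # bias corrections
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / softplus(np.sqrt(v_hat), beta)
    return param, m, v

# Toy usage: minimize f(x) = ||x||^2 from a random start.
rng = np.random.default_rng(0)
x = rng.standard_normal(5)
m = np.zeros_like(x)
v = np.zeros_like(x)
for t in range(1, 1001):
    grad = 2.0 * x                              # gradient of ||x||^2
    x, m, v = sadam_like_step(x, grad, m, v, t)
print(x)                                        # entries should be near zero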