An Adaptive Remote Stochastic Gradient Method for Training Neural Networks
From MaRDI portal
Publication: 6318195
arXiv: 1905.01422
MaRDI QID: Q6318195
Author name not available
Publication date: 3 May 2019
Abstract: We present the remote stochastic gradient (RSG) method, which computes gradients at configurable remote observation points so as to improve the convergence rate while simultaneously suppressing gradient noise across different curvatures. RSG is further combined with adaptive methods to construct ARSG for acceleration. The method is efficient in computation and memory, and is straightforward to implement. We analyze the convergence properties by modeling the training process as a dynamic system, which provides a guideline for selecting the configurable observation factor without grid search. ARSG attains a convergence rate guarantee in non-convex settings, which can be further improved in strongly convex settings. Numerical experiments demonstrate that ARSG achieves both faster convergence and better generalization than popular adaptive methods such as ADAM, NADAM, AMSGRAD, and RANGER on the tested problems. In particular, for training ResNet-50 on ImageNet, ARSG outperforms ADAM in convergence speed while surpassing SGD in generalization.
Has companion code repository: https://github.com/rationalspark/NAMSG
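The sketch below illustrates the kind of update the abstract describes: the gradient is evaluated at a configurable "remote observation point" rather than at the current iterate, and the result is combined with Adam-style adaptive moments. It is a minimal illustration under assumptions, not the authors' ARSG algorithm; the observation factor, moment coefficients, toy objective, and all names (arsg_like_step, obs_factor, grad_fn) are hypothetical choices made here. The linked repository contains the authors' reference implementation.

import numpy as np

def arsg_like_step(x, m, v, grad_fn, lr=1e-2, obs_factor=0.9,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    # Assumption: the remote observation point is reached by shifting the
    # iterate along the momentum direction by the configurable factor.
    x_obs = x - obs_factor * lr * m
    g = grad_fn(x_obs)                      # gradient at the observation point
    # Adam-style exponential moving averages of the first and second moments.
    m = beta1 * m + (1.0 - beta1) * g
    v = beta2 * v + (1.0 - beta2) * g * g
    # Preconditioned step driven by the moments of the remote gradient.
    x = x - lr * m / (np.sqrt(v) + eps)
    return x, m, v

# Usage on a toy ill-conditioned quadratic, f(x) = 0.5 * x^T A x (illustrative).
A = np.diag([1.0, 100.0])
grad_fn = lambda x: A @ x
x = np.array([1.0, 1.0])
m = np.zeros(2)
v = np.zeros(2)
for _ in range(1000):
    x, m, v = arsg_like_step(x, m, v, grad_fn)
print("final iterate:", x)

With obs_factor set to 0 the step reduces to a plain Adam-style update; positive values give a lookahead evaluation point, assuming the observation point is reached via the momentum direction as modeled above.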