Training Deep Neural Networks with Adaptive Momentum Inspired by the Quadratic Optimization
Publication: 6380616
arXiv: 2110.09057
MaRDI QID: Q6380616
Author name not available
Publication date: 18 October 2021
Abstract: Heavy ball momentum is crucial for accelerating (stochastic) gradient-based optimization algorithms in machine learning. In existing methods, heavy ball momentum is usually weighted by a uniform hyperparameter that requires excessive tuning, and even a carefully calibrated fixed value may not yield optimal performance. In this paper, to eliminate the effort of tuning the momentum-related hyperparameter, we propose a new adaptive momentum inspired by the optimal choice of the heavy ball momentum for quadratic optimization. The proposed adaptive heavy ball momentum can improve stochastic gradient descent (SGD) and Adam. SGD and Adam with the newly designed adaptive momentum are more robust to large learning rates, converge faster, and generalize better than the baselines. We verify the efficiency of SGD and Adam with the new adaptive momentum on extensive machine learning benchmarks, including image classification, language modeling, and machine translation. Finally, we provide convergence guarantees for SGD and Adam with the proposed adaptive momentum.
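For context, the "optimal choice of the heavy ball momentum for quadratic optimization" referenced in the abstract is the classical Polyak heavy-ball result for strongly convex quadratics. The minimal NumPy sketch below is not taken from the paper and does not reproduce its adaptive rule; the matrix, problem size, and iteration count are illustrative assumptions. It only shows the fixed optimal step size and momentum for a quadratic objective, the quantity the paper's method adapts during training.

```python
import numpy as np

# Sketch: Polyak's heavy-ball method on f(x) = 0.5 * x^T A x with the
# classical optimal constants for a quadratic. Illustrative assumptions:
# random SPD matrix A, dimension 50, 500 iterations.

rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
A = Q.T @ Q + 0.1 * np.eye(n)           # symmetric positive definite Hessian

mu, L = np.linalg.eigvalsh(A)[[0, -1]]  # smallest / largest eigenvalues

# Classical optimal heavy-ball constants for a quadratic (Polyak, 1964)
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2

x = rng.standard_normal(n)
x_prev = x.copy()
for _ in range(500):
    grad = A @ x
    # heavy-ball update: x_{k+1} = x_k - alpha * grad + beta * (x_k - x_{k-1})
    x, x_prev = x - alpha * grad + beta * (x - x_prev), x

print("final objective:", 0.5 * x @ A @ x)
```

The momentum weight beta above depends on the extreme eigenvalues of the Hessian, which are unknown for deep networks; the paper's contribution is an adaptive estimate of such a momentum, rather than a hand-tuned constant.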
Has companion code repository: https://github.com/kentaroy47/vision-transformers-cifar10