Gravity Optimizer: a Kinematic Approach on Optimization in Deep Learning

Publication: 6358818

arXiv: 2101.09192
MaRDI QID: Q6358818

Author name not available

Publication date: 22 January 2021

Abstract: We introduce Gravity, another algorithm for gradient-based optimization. In this paper, we explain how our novel idea changes parameters to reduce a deep learning model's loss. The optimizer has three intuitive hyper-parameters, and we propose the best values for them. We also propose an alternative to the moving average. To compare the performance of the Gravity optimizer with two common optimizers, Adam and RMSProp, two VGGNet models were trained on five standard datasets with a batch size of 128 for 100 epochs. The Gravity hyper-parameters did not need to be tuned for different models. As explained further in the paper, no overfitting-prevention technique was used, in order to investigate the direct impact of the optimizer itself on loss reduction. The obtained results show that the Gravity optimizer performs more stably than Adam and RMSProp and achieves higher validation accuracy on datasets with more output classes, such as CIFAR-100 (Fine).
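The following is a minimal sketch, not the authors' code, of the comparison protocol the abstract describes: a VGG-style network trained on CIFAR-100 (fine labels) with a batch size of 128 for 100 epochs, once per baseline optimizer. TensorFlow/Keras, the exact layer stack, and the built-in Adam and RMSProp instances are assumptions made for illustration; the Gravity optimizer itself is provided in the companion repository linked below and is not reproduced here.

```python
# A minimal sketch (not the authors' code) of the experiment described in the
# abstract: a VGG-style network trained on CIFAR-100 with fine labels, batch
# size 128, 100 epochs, once per optimizer. TensorFlow/Keras and the exact
# layer stack are assumptions; the Gravity optimizer itself lives in the
# companion repository and is not reproduced here.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar100.load_data(
    label_mode="fine")
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

def build_vgg_style(num_classes=100):
    """Small stack of 3x3 convolution blocks, a stand-in for the paper's VGGNet models."""
    model = models.Sequential()
    model.add(layers.Input(shape=(32, 32, 3)))
    for filters in (64, 128, 256):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    model.add(layers.Dense(512, activation="relu"))
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model

# As in the abstract, no overfitting prevention (dropout, augmentation, weight
# decay) is added, so the optimizer's direct effect on loss reduction is visible.
for name, optimizer in [("adam", optimizers.Adam()), ("rmsprop", optimizers.RMSprop())]:
    model = build_vgg_style()
    model.compile(optimizer=optimizer,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x_train, y_train,
                        batch_size=128, epochs=100,
                        validation_data=(x_test, y_test), verbose=2)
    print(name, "best val accuracy:", max(history.history["val_accuracy"]))
```

Swapping the Gravity implementation from the companion repository in for `optimizer` in the loop above would add the third training run used in the paper's comparison.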




Has companion code repository: https://github.com/dariush-bahrami/gravity.optimizer







