AEGD: Adaptive Gradient Descent with Energy
Publication: Q6350986
arXiv: 2010.05109 · MaRDI QID: Q6350986
Author name not available
Publication date: 10 October 2020
Abstract: We propose AEGD, a new algorithm for first-order gradient-based optimization of non-convex objective functions, based on a dynamically updated energy variable. The method is shown to be unconditionally energy stable, irrespective of the step size. We prove energy-dependent convergence rates of AEGD for both non-convex and convex objectives, which, for a suitably small step size, recover the desired convergence rates of batch gradient descent. We also provide an energy-dependent bound on the stationary convergence of AEGD in the stochastic non-convex setting. The method is straightforward to implement and requires little tuning of hyper-parameters. Experimental results demonstrate that AEGD works well for a large variety of optimization problems: it is robust with respect to initial data and capable of making rapid initial progress. The stochastic AEGD shows generalization performance comparable to, and often better than, SGD with momentum for deep neural networks.
Has companion code repository: https://github.com/txping/AEGD
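For a concrete picture of what an energy-adaptive gradient step can look like, the following is a minimal NumPy sketch in the spirit of the abstract: each coordinate carries an energy variable that only decreases, which is what underlies the unconditional-stability claim. The particular update form (built from the gradient of sqrt(f + c)), the constants c and eta, and the quadratic test objective are illustrative assumptions; the authors' reference implementation is in the companion repository linked above.

```python
# Minimal sketch of an energy-adaptive gradient update (illustrative assumptions:
# update form, constants c and eta, and the test objective). Not the authors'
# reference implementation; see the companion repository for that.
import numpy as np

def f(x):
    """Illustrative test objective (a simple quadratic)."""
    return 0.5 * np.sum(x ** 2)

def grad_f(x):
    return x

def energy_adaptive_descent(x0, eta=0.1, c=1.0, steps=200):
    x = np.asarray(x0, dtype=float)
    # Per-coordinate energy variable, initialized from the shifted objective value.
    r = np.full_like(x, np.sqrt(f(x) + c))
    for _ in range(steps):
        # Gradient of sqrt(f + c), the "energy" form of the objective.
        v = grad_f(x) / (2.0 * np.sqrt(f(x) + c))
        # Energy update: r shrinks monotonically regardless of eta,
        # which is the source of the unconditional stability described above.
        r = r / (1.0 + 2.0 * eta * v ** 2)
        # Parameter update scaled by the current energy.
        x = x - 2.0 * eta * r * v
    return x, r

if __name__ == "__main__":
    x_final, r_final = energy_adaptive_descent(np.array([3.0, -2.0]))
    print("final iterate:", x_final, "final energy:", r_final)
```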