SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients
Publication: 6370375
arXiv: 2106.08208
MaRDI QID: Q6370375
Authors: Heng Huang, Feihu Huang, Junyi Li
Publication date: 15 June 2021
Abstract: Adaptive gradient methods have shown excellent performance for solving many machine learning problems. Although multiple adaptive gradient methods were recently studied, they mainly focus on either empirical or theoretical aspects and work only for specific problems by using specific adaptive learning rates. It is thus desirable to design a universal framework for practical algorithms of adaptive gradients with theoretical guarantees for solving general problems. To fill this gap, we propose a faster and universal framework of adaptive gradients (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradient forms. Moreover, our framework can flexibly integrate momentum and variance-reduction techniques. In particular, our novel framework provides convergence analysis support for adaptive gradient methods in the nonconvex setting. In the theoretical analysis, we prove that our SUPER-ADAM algorithm can achieve the best-known gradient (i.e., stochastic first-order oracle (SFO)) complexity of $\tilde{O}(\epsilon^{-3})$ for finding an $\epsilon$-stationary point of nonconvex optimization, which matches the lower bound for stochastic smooth nonconvex optimization. In numerical experiments, we employ various deep learning tasks to validate that our algorithm consistently outperforms the existing adaptive algorithms. Code is available at https://github.com/LIJUNYI95/SuperAdam
Has companion code repository: https://github.com/lijunyi95/superadam
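To make the framework described in the abstract concrete, the following is a minimal sketch, not the authors' implementation, of one instance of the SUPER-ADAM update: it pairs an Adam-style diagonal adaptive matrix H_t = diag(sqrt(v_t)) + lam * I with the momentum gradient estimator and the interpolated iterate. The function name, hyperparameter values, and the NumPy setting are illustrative assumptions.

```python
import numpy as np

def super_adam_step(x, m, v, grad,
                    gamma=0.01,   # base step size (illustrative value)
                    mu=0.5,       # interpolation weight mu_t (assumed constant here)
                    alpha=0.1,    # momentum weight alpha_t (assumed constant here)
                    beta=0.999,   # decay for the second-moment estimate
                    lam=1e-3):    # regularizer lambda added to the adaptive matrix
    # Momentum gradient estimator: m_t = (1 - alpha) * m_{t-1} + alpha * g_t
    m = (1.0 - alpha) * m + alpha * grad
    # Adam-style second-moment estimate defining the diagonal adaptive matrix
    v = beta * v + (1.0 - beta) * grad ** 2
    h = np.sqrt(v) + lam                 # diagonal of H_t, kept positive by lam
    # Inner point: x_tilde = x_t - gamma * H_t^{-1} m_t
    x_tilde = x - gamma * m / h
    # Interpolated iterate: x_{t+1} = (1 - mu) * x_t + mu * x_tilde
    x = (1.0 - mu) * x + mu * x_tilde
    return x, m, v

# Illustrative usage: minimize f(x) = 0.5 * ||x||^2, whose gradient is x itself
x, m, v = np.ones(5), np.zeros(5), np.zeros(5)
for _ in range(200):
    x, m, v = super_adam_step(x, m, v, grad=x)
```

Other instances of the framework, such as a different adaptive matrix or a variance-reduced (STORM-style) gradient estimator, fit the same template by changing how h and m are formed above.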