
On the Convergence Analysis of Aggregated Heavy-Ball Method

Publication:6392812

DOI: 10.1007/978-3-031-09607-5_1
arXiv: 2203.02396
MaRDI QID: Q6392812

Marina Danilova

Publication date: 4 March 2022

Abstract: Momentum first-order optimization methods are the workhorses in various optimization tasks, e.g., in the training of deep neural networks. Recently, Lucas et al. (2019) proposed a method called Aggregated Heavy-Ball (AggHB) that uses multiple momentum vectors corresponding to different momentum parameters and averages these vectors to compute the update direction at each iteration. Lucas et al. (2019) show that AggHB is more stable than the classical Heavy-Ball method even with large momentum parameters and performs well in practice. However, the method was analyzed only for quadratic objectives and for online optimization tasks under the uniformly bounded gradients assumption, which is not satisfied for many practically important problems. In this work, we address this issue and propose the first analysis of AggHB for smooth objective functions in the non-convex, convex, and strongly convex cases without additional restrictive assumptions. Our complexity results match the best-known ones for the Heavy-Ball method. We also illustrate the efficiency of AggHB numerically on several non-convex and convex problems.

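The update rule the abstract describes (several heavy-ball momentum buffers, one per momentum parameter, averaged to form the step direction) can be sketched in a few lines of Python. The sketch below is only an illustration of that idea: the momentum parameters betas, the step size lr, the uniform averaging weights, and the name agghb_minimize are assumptions made for the example, not the exact scheme or tuning analyzed in the paper.

import numpy as np

def agghb_minimize(grad, x0, betas=(0.0, 0.5, 0.9), lr=0.01, n_iters=2000):
    # One momentum buffer per momentum parameter beta; the update direction
    # is the average of all buffers (uniform averaging weights assumed here).
    x = np.asarray(x0, dtype=float)
    buffers = [np.zeros_like(x) for _ in betas]
    for _ in range(n_iters):
        g = grad(x)
        # Each buffer follows the classical heavy-ball recursion with its own beta.
        buffers = [beta * v + g for beta, v in zip(betas, buffers)]
        # Aggregate: average the momentum vectors to form the step direction.
        direction = sum(buffers) / len(buffers)
        x = x - lr * direction
    return x

# Usage: minimize the strongly convex quadratic f(x) = 0.5 * x^T A x.
A = np.diag([1.0, 10.0])
x_min = agghb_minimize(lambda x: A @ x, x0=np.array([5.0, -3.0]))
print(x_min)  # close to the minimizer at the origin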
