
On the Convergence of NEAR-DGD for Nonconvex Optimization with Second Order Guarantees

From MaRDI portal
Publication: Q6363795

arXiv: 2103.14233 · MaRDI QID: Q6363795

Authors: Author name not available, Ermin Wei

Publication date: 25 March 2021

Abstract: We consider the setting where the nodes of an undirected, connected network collaborate to solve a shared objective modeled as the sum of smooth functions. We assume that each summand is privately known by a unique node. NEAR-DGD is a distributed first order method which permits adjusting the amount of communication between nodes relative to the amount of computation performed locally in order to balance convergence accuracy and total application cost. In this work, we generalize the convergence properties of a variant of NEAR-DGD from the strongly convex to the nonconvex case. Under mild assumptions, we show convergence to minimizers of a custom Lyapunov function. Moreover, we demonstrate that the gap between those minimizers and the second order stationary solutions of the original problem can become arbitrarily small depending on the choice of algorithm parameters. Finally, we accompany our theoretical analysis with a numerical experiment to evaluate the empirical performance of NEAR-DGD in the nonconvex setting.
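The abstract describes NEAR-DGD as alternating a local gradient step with a tunable number of consensus (communication) rounds over the network's mixing matrix. Below is a minimal illustrative sketch of that iteration on a toy problem; the quadratic summands, ring network, step size, and communication schedule are all hypothetical choices for demonstration, not the paper's experimental setup.

```python
import numpy as np

# Toy sketch of the NEAR-DGD-style iteration: n nodes each privately
# hold a smooth summand; here we use hypothetical scalar quadratics
# f_i(x) = (x - b_i)^2 / 2, whose global minimizer is mean(b).
n = 5                                     # number of nodes
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # private data per node
alpha = 0.1                               # gradient step size (assumed)

# Doubly stochastic mixing matrix for a ring graph (uniform weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1.0 / 3.0
    W[i, (i - 1) % n] = 1.0 / 3.0
    W[i, (i + 1) % n] = 1.0 / 3.0

def grad(x):
    """Stacked local gradients: grad of f_i at node i's iterate."""
    return x - b

x = np.zeros(n)  # one local iterate per node
for k in range(200):
    y = x - alpha * grad(x)            # local computation step
    t_k = 1 + k // 20                  # growing number of consensus rounds
    x = np.linalg.matrix_power(W, t_k) @ y  # nested communication steps

# All local iterates approach the global minimizer mean(b) = 3.0.
print(x)
```

Increasing `t_k` over time mirrors the trade-off the abstract mentions: more communication per iteration tightens consensus (and accuracy) at higher total cost.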
