Optimal algorithms for smooth and strongly convex distributed optimization in networks

From MaRDI portal
Publication:6283721

arXiv: 1702.08704
MaRDI QID: Q6283721

Author name not available

Publication date: 28 February 2017

Abstract: In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e. master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_g}(1+\Delta\tau)\ln(1/\varepsilon))$, where $\kappa_g$ is the condition number of the (global) function to optimize, $\Delta$ is the diameter of the network, and $\tau$ (resp. $1$) is the time needed to communicate values between two neighbors (resp. perform local computations). For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, that achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_l}(1+\frac{\tau}{\sqrt{\gamma}})\ln(1/\varepsilon))$, where $\kappa_l$ is the condition number of the local functions and $\gamma$ is the (normalized) eigengap of the gossip matrix used for communication between nodes. We then verify the efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and classification by logistic regression.
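As a quick illustration of how the two bounds from the abstract compare, the Python sketch below evaluates them directly; the constants hidden by the $O(\cdot)$ notation are omitted, and the numerical values chosen for $\kappa_g$, $\kappa_l$, $\Delta$, $\tau$ and $\gamma$ are illustrative placeholders, not values taken from the paper.

```python
import math

def centralized_time(kappa_g, delta, tau, eps):
    # Bound for distributed Nesterov acceleration (master/slave setting):
    # O(sqrt(kappa_g) * (1 + delta * tau) * ln(1/eps)), constants omitted.
    return math.sqrt(kappa_g) * (1 + delta * tau) * math.log(1 / eps)

def decentralized_time(kappa_l, tau, gamma, eps):
    # Bound for the MSDA method (gossip setting):
    # O(sqrt(kappa_l) * (1 + tau / sqrt(gamma)) * ln(1/eps)), constants omitted.
    return math.sqrt(kappa_l) * (1 + tau / math.sqrt(gamma)) * math.log(1 / eps)

# Illustrative placeholder parameters (hypothetical, not from the paper):
kappa_g, kappa_l = 100.0, 150.0  # global / local condition numbers
delta = 10                       # network diameter
tau = 5.0                        # communication time between neighbors (local computation time = 1)
gamma = 0.05                     # normalized eigengap of the gossip matrix
eps = 1e-6                       # target precision

print("centralized bound  :", centralized_time(kappa_g, delta, tau, eps))
print("decentralized bound:", decentralized_time(kappa_l, tau, gamma, eps))
```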

Has companion code repository: https://github.com/adelnabli/dadao

