LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning

arXiv: 1805.09965
MaRDI QID: Q6302088

Author name not available

Publication date: 24 May 2018

Abstract: This paper presents a new class of gradient methods for distributed machine learning that adaptively skip the gradient calculations to learn with reduced communication and computation. Simple rules are designed to detect slowly-varying gradients and, therefore, trigger the reuse of outdated gradients. The resultant gradient-based algorithms are termed Lazily Aggregated Gradient --- justifying our acronym LAG used henceforth. Theoretically, the merits of this contribution are: i) the convergence rate is the same as batch gradient descent in strongly-convex, convex, and nonconvex smooth cases; and, ii) if the distributed datasets are heterogeneous (quantified by certain measurable constants), the communication rounds needed to achieve a targeted accuracy are reduced thanks to the adaptive reuse of lagged gradients. Numerical experiments on both synthetic and real data corroborate a significant communication reduction compared to alternatives.
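
The sketch below (Python/NumPy) illustrates the mechanism the abstract describes: a worker uploads a fresh gradient only when it has drifted noticeably since its last upload, and the server otherwise reuses the outdated copy. The synthetic least-squares data and the constants alpha, xi, and D, as well as the exact skipping condition, are illustrative assumptions for this sketch, not the rules analyzed in the paper or the code in the companion repository.

# A minimal sketch of the lazily aggregated gradient idea from the abstract.
# Assumptions: synthetic least-squares data and a simplified trigger rule with
# placeholder constants xi and D; this is not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)
M, n, d = 5, 200, 10                          # workers, samples per worker, features
A = [rng.normal(size=(n, d)) for _ in range(M)]
b = [a @ rng.normal(size=d) + 0.1 * rng.normal(size=n) for a in A]

def local_grad(m, theta):
    # Least-squares gradient held by worker m.
    return A[m].T @ (A[m] @ theta - b[m]) / n

theta = np.zeros(d)
alpha, xi, D = 1e-2, 0.01, 10                 # step size, trigger threshold, window
stored = [local_grad(m, theta) for m in range(M)]   # gradients last uploaded to the server
history = [theta.copy()]
uploads = 0

for k in range(300):
    # Recent movement of the iterates, used to scale the skipping condition.
    recent = sum(np.sum((history[-i] - history[-i - 1]) ** 2)
                 for i in range(1, min(D, len(history) - 1) + 1))
    for m in range(M):
        g = local_grad(m, theta)
        # Upload only if the local gradient drifted noticeably; otherwise the
        # server keeps reusing the outdated gradient stored[m].
        if np.sum((g - stored[m]) ** 2) > xi * recent / (alpha ** 2 * M ** 2):
            stored[m] = g
            uploads += 1
    theta = theta - alpha * np.mean(stored, axis=0)   # aggregate and descend
    history.append(theta.copy())

print("gradient uploads:", uploads, "out of a possible", 300 * M)

With these placeholder constants, many rounds reuse outdated gradients, so the printed upload count should stay well below the 1500 uploads that plain distributed gradient descent would need for the same number of iterations, mirroring the communication reduction the abstract refers to.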




Companion code repository: https://github.com/chentianyi1991/LAG-code