IntSGD: Adaptive Floatless Compression of Stochastic Gradients

Publication: 6360807

arXiv: 2102.08374
MaRDI QID: Q6360807

Author name not available

Publication date: 16 February 2021

Abstract: We propose a family of adaptive integer compression operators for distributed Stochastic Gradient Descent (SGD) that do not communicate a single float. This is achieved by multiplying floating-point vectors by a number known to every device and then rounding to integers. In contrast to the prior work on integer compression for SwitchML by Sapio et al. (2021), our IntSGD method is provably convergent and computationally cheaper, as it estimates the scaling of the vectors adaptively. Our theory shows that the iteration complexity of IntSGD matches that of SGD up to constant factors for both convex and non-convex, smooth and non-smooth functions, with and without overparameterization. Moreover, our algorithm can be tailored to the popular all-reduce primitive and shows promising empirical performance.
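The abstract describes the core mechanism: every worker multiplies its gradient by a scale known to all devices, rounds to integers, and communicates only integers, which composes naturally with an integer all-reduce. Below is a minimal Python sketch of that idea; the function names, the stochastic-rounding choice, and the fixed scale value are illustrative assumptions, not the paper's adaptive scaling rule.

import numpy as np

def int_compress(v: np.ndarray, scale: float, rng: np.random.Generator) -> np.ndarray:
    # Multiply the float vector by a scale known to every worker and round
    # stochastically to integers, so only integers need to be communicated.
    scaled = v * scale
    low = np.floor(scaled)
    # Unbiased stochastic rounding: round up with probability equal to the fractional part.
    return (low + (rng.random(v.shape) < (scaled - low))).astype(np.int64)

def decompress(q: np.ndarray, scale: float) -> np.ndarray:
    # Undo the shared scaling after the integer all-reduce.
    return q.astype(np.float64) / scale

# Toy simulation of one distributed step (hypothetical setup, not the paper's code).
rng = np.random.default_rng(0)
n_workers, dim = 4, 8
grads = [rng.standard_normal(dim) for _ in range(n_workers)]

# A shared scale; IntSGD estimates this adaptively, here we use a fixed
# illustrative value known to every worker.
scale = 1e4

# Each worker sends integers; an integer all-reduce sums them exactly.
summed_ints = sum(int_compress(g, scale, rng) for g in grads)
avg_grad = decompress(summed_ints, scale) / n_workers

print(np.max(np.abs(avg_grad - np.mean(grads, axis=0))))  # small rounding error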




Has companion code repository: https://github.com/bokunwang1/intsgd

