Optimizing the Communication-Accuracy Trade-off in Federated Learning with Rate-Distortion Theory

arXiv: 2201.02664
MaRDI QID: Q6387741

Author name not available

Publication date: 7 January 2022

Abstract: A significant bottleneck in federated learning (FL) is the network communication cost of sending model updates from client devices to the central server. We present a comprehensive empirical study of the statistics of model updates in FL, as well as the role and benefits of various compression techniques. Motivated by these observations, we propose a novel method to reduce the average communication cost that is near-optimal in many use cases and outperforms Top-K, DRIVE, 3LC, and QSGD on Stack Overflow next-word prediction, a realistic and challenging FL benchmark. We achieve this by examining the problem through the lens of rate-distortion theory and proposing distortion as a reliable proxy for model accuracy. Distortion can then be used more effectively to optimize the trade-off between model performance and communication cost across clients. We demonstrate empirically that, despite the non-i.i.d. nature of federated learning, the rate-distortion frontier is consistent across datasets, optimizers, clients, and training rounds.
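To make the rate-distortion framing concrete, the sketch below traces an empirical rate-distortion curve for a single model update: each operating point pairs a rate (bits per parameter) with the distortion (mean squared reconstruction error) introduced by the compressor. This is a minimal illustration under stated assumptions, not the paper's implementation; the uniform scalar quantizer, the helper names uniform_quantize and rate_distortion_point, and the synthetic Laplacian update are all choices made here, while the actual compression schemes live in the companion repository linked below.

    import numpy as np

    def uniform_quantize(update, num_bits):
        # Uniform scalar quantizer: an illustrative stand-in, not the
        # codec proposed in the paper.
        lo, hi = float(update.min()), float(update.max())
        levels = 2 ** num_bits - 1
        step = (hi - lo) / levels if hi > lo else 1.0
        return lo + np.round((update - lo) / step) * step

    def rate_distortion_point(update, num_bits):
        # One operating point: rate in bits per parameter (fixed-rate
        # code) and distortion as mean squared reconstruction error.
        decoded = uniform_quantize(update, num_bits)
        distortion = float(np.mean((update - decoded) ** 2))
        return num_bits, distortion

    # Synthetic stand-in for one client's model update (arbitrary choice).
    rng = np.random.default_rng(0)
    update = rng.laplace(scale=0.01, size=10_000).astype(np.float32)
    for bits in (1, 2, 4, 8):
        rate, dist = rate_distortion_point(update, bits)
        print(f"{rate} bits/param -> distortion {dist:.3e}")

Sweeping such operating points across codecs traces out the rate-distortion frontier the abstract refers to; the paper's proposal is to optimize distortion at a given rate, rather than accuracy measured after training, when tuning the trade-off across clients.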

Companion code repository: https://github.com/google-research/federated/tree/1b31b84/compressed_communication