Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning

Publication: 6401253

arXiv: 2206.02604
MaRDI QID: Q6401253

Author name not available

Publication date: 6 June 2022

Abstract: In this paper, we use tools from rate-distortion theory to establish new upper bounds on the generalization error of statistical distributed learning algorithms. Specifically, there are K clients whose individually chosen models are aggregated by a central server. The bounds depend on the compressibility of each client's algorithm while keeping the other clients' algorithms uncompressed, and leverage the fact that small changes in each local model change the aggregated model by a factor of only 1/K. Adopting a recently proposed approach of Sefidgaran et al. and extending it suitably to the distributed setting, we obtain smaller rate-distortion terms, which are shown to translate into tighter generalization bounds. The bounds are then applied to distributed support vector machines (SVM), suggesting that the generalization error of the distributed setting decays faster than that of the centralized one by a factor of $\mathcal{O}(\log(K)/\sqrt{K})$. This finding is also validated experimentally. A similar conclusion is obtained for a multiple-round federated learning setup in which each client uses stochastic gradient Langevin dynamics (SGLD).
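
The 1/K observation in the abstract admits a simple illustration. The sketch below assumes the server aggregates the K client models by plain averaging; the function name, toy dimensions, and random data are hypothetical and not taken from the companion repository. It checks numerically that perturbing a single client's model by delta moves the aggregated model by exactly delta/K, which is the small per-client sensitivity that the rate-distortion terms exploit.

    import numpy as np

    def aggregate(client_models):
        # Assumed server-side aggregation: plain average of the K client models.
        return np.mean(client_models, axis=0)

    K, d = 10, 5                              # number of clients, toy model dimension
    rng = np.random.default_rng(0)
    client_models = rng.normal(size=(K, d))   # each row: one client's local model

    w_bar = aggregate(client_models)

    # Perturb a single client's model by delta and re-aggregate.
    delta = rng.normal(size=d)
    perturbed = client_models.copy()
    perturbed[0] += delta

    # The aggregate moves by delta / K: distorting one local model costs only
    # a 1/K-scaled distortion at the aggregated model.
    print(np.allclose(aggregate(perturbed) - w_bar, delta / K))  # True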

Has companion code repository: https://github.com/romainchor/datascience
