Federated Multi-Task Learning under a Mixture of Distributions

Publication: MaRDI QID Q6375851

arXiv: 2108.10252

Author name not available

Publication date: 23 August 2021

Abstract: The increasing size of data generated by smartphones and IoT devices motivated the development of Federated Learning (FL), a framework for on-device collaborative training of machine learning models. First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client, due to the inherent heterogeneity of local data distributions. Federated multi-task learning (MTL) approaches can learn personalized models by formulating an opportune penalized optimization problem. The penalization term can capture complex relations among personalized models, but eschews clear statistical assumptions about local data distributions. In this work, we propose to study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions. This assumption encompasses most of the existing personalized FL approaches and leads to federated EM-like algorithms for both client-server and fully decentralized settings. Moreover, it provides a principled way to serve personalized models to clients not seen at training time. The algorithms' convergence is analyzed through a novel federated surrogate optimization framework, which can be of general interest. Experimental results on FL benchmarks show that our approach provides models with higher accuracy and fairness than state-of-the-art methods.




Has companion code repository: https://github.com/KarhouTam/FL-bench