Federated Composite Optimization
Publication: 6353923
arXiv: 2011.08474
MaRDI QID: Q6353923
Authors: Sashank Reddi, Manzil Zaheer, Honglin Yuan
Publication date: 17 November 2020
Abstract: Federated Learning (FL) is a distributed learning paradigm that scales on-device learning collaboratively and privately. Standard FL algorithms such as FedAvg are primarily geared towards smooth unconstrained settings. In this paper, we study the Federated Composite Optimization (FCO) problem, in which the loss function contains a non-smooth regularizer. Such problems arise naturally in FL applications that involve sparsity, low-rank, monotonicity, or more general constraints. We first show that straightforward extensions of primal algorithms such as FedAvg are not well-suited for FCO since they suffer from the "curse of primal averaging," resulting in poor convergence. As a solution, we propose a new primal-dual algorithm, Federated Dual Averaging (FedDualAvg), which by employing a novel server dual averaging procedure circumvents the curse of primal averaging. Our theoretical analysis and empirical experiments demonstrate that FedDualAvg outperforms the other baselines.
Companion code repository: https://github.com/hongliny/FCO-ICML21
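To make the composite structure in the abstract concrete, the following is a minimal, self-contained sketch of a federated dual-averaging scheme for an l1-regularized least-squares problem: clients run local dual-averaging steps and the server averages the clients' dual states rather than their primal iterates. It is not the authors' implementation (see the companion repository above); the loss, function names, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch of federated dual averaging on a composite (l1-regularized)
# least-squares problem. NOT the authors' implementation; see the companion
# repository above. All names, losses, and hyperparameters are hypothetical.
import numpy as np


def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)


def client_grad(A, b, w):
    """Gradient of a least-squares client loss 0.5 * ||A w - b||^2 / n."""
    return A.T @ (A @ w - b) / len(b)


def fed_dual_avg(clients, dim, rounds=50, local_steps=10, eta=0.1, lam=0.05):
    """Simplified federated dual averaging for min_w sum_k f_k(w) + lam * ||w||_1.

    Each client accumulates scaled gradients in a dual state z; the server then
    averages the clients' *dual* states, so the non-smooth l1 term enters only
    through the primal map (soft-thresholding) and is never averaged away.
    """
    z_server = np.zeros(dim)  # server dual state: accumulated scaled gradients
    steps = 0                 # total local steps accumulated into z_server
    for _ in range(rounds):
        client_duals = []
        for A, b in clients:
            z, t = z_server.copy(), steps
            for _ in range(local_steps):
                # Primal map: argmin_w <z, w> + t*eta*lam*||w||_1 + 0.5*||w||^2
                w = soft_threshold(-z, t * eta * lam)
                z += eta * client_grad(A, b, w)  # dual (gradient) accumulation
                t += 1
            client_duals.append(z)
        z_server = np.mean(client_duals, axis=0)  # average dual, not primal, states
        steps += local_steps
    return soft_threshold(-z_server, steps * eta * lam)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, n = 20, 100
    w_true = np.zeros(dim)
    w_true[:3] = [1.0, -2.0, 0.5]  # sparse ground truth
    clients = []
    for _ in range(4):
        A = rng.normal(size=(n, dim))
        b = A @ w_true + 0.01 * rng.normal(size=n)
        clients.append((A, b))
    w_hat = fed_dual_avg(clients, dim)
    print("nonzero coordinates:", np.count_nonzero(np.abs(w_hat) > 1e-3))
```

Averaging in the dual space is what distinguishes this sketch from a naive FedAvg-style extension, which would average the post-prox primal iterates across clients and thereby lose the sparsity the regularizer is meant to induce, i.e., the "curse of primal averaging" described in the abstract.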