Byzantine-Robust Decentralized Learning via ClippedGossip

From MaRDI portal
Publication: Q6390026

arXiv: 2202.01545 · MaRDI QID: Q6390026

Author name not available

Publication date: 3 February 2022

Abstract: In this paper, we study the challenging task of Byzantine-robust decentralized training on arbitrary communication graphs. Unlike federated learning, where workers communicate through a server, workers in the decentralized setting can only talk to their neighbors, making it harder to reach consensus and to benefit from collaborative training. To address these issues, we propose the ClippedGossip algorithm for Byzantine-robust consensus and optimization, which is the first to provably converge to an O(δ_max ζ²/γ²) neighborhood of a stationary point for non-convex objectives under standard assumptions. Finally, we demonstrate the encouraging empirical performance of ClippedGossip under a large number of attacks.
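The abstract describes a gossip step in which each worker clips the contributions of its neighbors before averaging them, limiting the influence of Byzantine peers. A minimal sketch of one such update is below; the function name, the norm-based clipping rule, and the threshold `tau` are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import numpy as np

def clipped_gossip_step(x_i, neighbor_params, weights, tau):
    """One illustrative clipped-gossip update for worker i.

    Each neighbor's difference (x_j - x_i) has its Euclidean norm
    clipped to tau, so a Byzantine neighbor can shift x_i by at
    most weight * tau per step.
    """
    agg = np.zeros_like(x_i)
    for x_j, w in zip(neighbor_params, weights):
        diff = x_j - x_i
        norm = np.linalg.norm(diff)
        if norm > tau:
            diff = diff * (tau / norm)  # clip the update's norm to tau
        agg += w * diff
    return x_i + agg
```

With a large `tau` this reduces to plain weighted gossip averaging; a small `tau` bounds how far any single neighbor, honest or Byzantine, can pull the local model in one round.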

Has companion code repository: https://github.com/epfml/byzantine-robust-decentralized-optimizer

This page was built for publication: Byzantine-Robust Decentralized Learning via ClippedGossip
