Stochastic Gradient Coding for Straggler Mitigation in Distributed Learning
From MaRDI portal
Publication:6318680
arXiv: 1905.05383, MaRDI QID: Q6318680
Author name not available
Publication date: 14 May 2019
Abstract: We consider distributed gradient descent in the presence of stragglers. Recent work on gradient coding and approximate gradient coding has shown how to add redundancy in distributed gradient descent to guarantee convergence even if some workers are stragglers, that is, slow or non-responsive. In this work we propose an approximate gradient coding scheme called Stochastic Gradient Coding (SGC), which works when the stragglers are random. SGC distributes data points redundantly to workers according to a pair-wise balanced design, and then simply ignores the stragglers. We prove that the convergence rate of SGC mirrors that of batched Stochastic Gradient Descent (SGD) for the ℓ2 loss function, and show how the convergence rate can improve with the redundancy. We also provide bounds for more general convex loss functions. We show empirically that SGC requires a small amount of redundancy to handle a large number of stragglers and that it can outperform existing approximate gradient codes when the number of stragglers is large.
Has companion code repository: https://github.com/RawadB01/SGC
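The scheme described in the abstract can be illustrated with a small simulation. The sketch below is not taken from the companion repository; it assumes a synthetic least-squares problem, approximates the pair-wise balanced design by assigning each data point to d distinct workers chosen uniformly at random, and models each worker as independently straggling with probability p per round. Stragglers are simply ignored, and the aggregated gradient is rescaled by 1/(d(1 - p)) per replica so that it remains an unbiased estimate of the full gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem (illustrative; not from the paper's experiments).
n, dim = 200, 5
X = rng.normal(size=(n, dim))
w_true = rng.normal(size=dim)
y = X @ w_true + 0.1 * rng.normal(size=n)

n_workers = 10
d = 3            # replication factor: copies of each data point
p_straggle = 0.3 # each worker independently straggles with probability p

# Approximate pair-wise balanced replication: each data point is placed on
# d distinct workers chosen uniformly at random.
worker_data = [[] for _ in range(n_workers)]
for i in range(n):
    for wk in rng.choice(n_workers, size=d, replace=False):
        worker_data[wk].append(i)

w = np.zeros(dim)
lr = 0.05
for step in range(300):
    alive = rng.random(n_workers) > p_straggle  # stragglers are simply ignored
    g = np.zeros(dim)
    for wk in range(n_workers):
        if not alive[wk]:
            continue
        idx = worker_data[wk]
        # Worker wk returns the summed least-squares gradient over its shard.
        resid = X[idx] @ w - y[idx]
        g += X[idx].T @ resid
    # Each point has d copies, each surviving with probability (1 - p), so
    # dividing by n * d * (1 - p) makes g an unbiased estimate of the
    # full-data gradient (1/n) * X^T (Xw - y).
    g /= n * d * (1 - p_straggle)
    w -= lr * g

print(float(np.linalg.norm(w - w_true)))
```

With these (hypothetical) parameters the iterate approaches w_true despite roughly a third of the workers being dropped each round, reflecting the abstract's claim that a small amount of redundancy suffices to tolerate many random stragglers.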