DecentLaM: Decentralized Momentum SGD for Large-batch Deep Training
Publication: 6366030
arXiv: 2104.11981
MaRDI QID: Q6366030
Author name not available
Publication date: 24 April 2021
Abstract: The scale of deep learning nowadays calls for efficient distributed training algorithms. Decentralized momentum SGD (DmSGD), in which each node averages only with its neighbors, is more communication efficient than vanilla parallel momentum SGD, which requires a global average across all computing nodes. On the other hand, large-batch training has been shown to be critical for achieving runtime speedup. This motivates us to investigate how DmSGD performs in the large-batch scenario. In this work, we find that the momentum term can amplify the inconsistency bias in DmSGD. This bias becomes more pronounced as the batch size grows and hence results in severe performance degradation. We next propose DecentLaM, a novel decentralized large-batch momentum SGD method that removes the momentum-incurred bias. Convergence rates are established for both non-convex and strongly-convex scenarios. Our theoretical results justify the superiority of DecentLaM over DmSGD, especially in the large-batch scenario. Experimental results on a variety of computer vision tasks and models demonstrate that DecentLaM delivers both efficient and high-quality training.
Has companion code repository: https://github.com/Bluefog-Lib/bluefog
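To illustrate the setting the abstract describes, the following is a minimal sketch of decentralized momentum SGD: each node takes a local momentum-SGD step and then averages its parameters only with its graph neighbors through a doubly-stochastic mixing matrix. This is a generic illustration of the DmSGD idea, not the paper's DecentLaM update nor code from the BlueFog repository; all names (W, grad_fn, the ring topology) are illustrative assumptions.

```python
# Minimal sketch (assumption: generic DmSGD-style update, not the paper's exact algorithm).
import numpy as np

def decentralized_momentum_sgd(x0, grad_fn, W, lr=0.1, beta=0.9, steps=100):
    """x0: (n_nodes, dim) initial parameters, one row per node.
    grad_fn(i, x): stochastic gradient at node i for parameters x.
    W: (n_nodes, n_nodes) doubly-stochastic mixing matrix; W[i, j] > 0
       only if nodes i and j are neighbors (or i == j)."""
    x = x0.copy()
    m = np.zeros_like(x)                     # per-node momentum buffers
    for _ in range(steps):
        for i in range(x.shape[0]):          # local momentum-SGD step on each node
            g = grad_fn(i, x[i])
            m[i] = beta * m[i] + g
            x[i] = x[i] - lr * m[i]
        x = W @ x                            # average only with graph neighbors
    return x

# Example: 4 nodes on a ring minimizing a shared quadratic 0.5 * ||x - 1||^2.
if __name__ == "__main__":
    n, d = 4, 3
    ring = np.array([[1/3 if abs(i - j) % n in (0, 1, n - 1) else 0.0
                      for j in range(n)] for i in range(n)])
    grad = lambda i, xi: xi - 1.0 + 0.01 * np.random.randn(d)  # noisy gradient
    x = decentralized_momentum_sgd(np.zeros((n, d)), grad, ring)
    print(x.mean(axis=0))                    # node average approaches the optimum at 1
```

In this toy setup, the global average is replaced by one multiplication with the mixing matrix W, which is what makes the method communication efficient; the bias the paper analyzes arises from the interaction of the momentum buffers m with this neighbor-only averaging.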