Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
arXiv: 2103.03239
MaRDI QID: Q6362163
Author name not available
Publication date: 4 March 2021
Abstract: Training deep neural networks on large datasets can often be accelerated by using multiple compute nodes. This approach, known as distributed training, can utilize hundreds of computers via specialized message-passing protocols such as Ring All-Reduce. However, running these protocols at scale requires reliable high-speed networking that is only available in dedicated clusters. In contrast, many real-world applications, such as federated learning and cloud-based distributed training, operate on unreliable devices with unstable network bandwidth. As a result, these applications are restricted to using parameter servers or gossip-based averaging protocols. In this work, we lift that restriction by proposing Moshpit All-Reduce, an iterative averaging protocol that converges exponentially to the global average. We demonstrate the efficiency of our protocol for distributed optimization with strong theoretical guarantees. The experiments show a 1.3x speedup for ResNet-50 training on ImageNet compared to competitive gossip-based strategies and a 1.5x speedup when training ALBERT-large from scratch using preemptible compute nodes.
Has companion code repository: https://github.com/learning-at-home/hivemind
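The linked hivemind repository contains the full implementation. As a rough illustration of the iterative averaging idea, the sketch below is a minimal NumPy simulation and does not use the hivemind API: it assumes peers are placed on a virtual multi-dimensional grid, and in each round every group of peers sharing all grid coordinates except one averages its values, with the grouped dimension rotating from round to round. The function name, grid layout, and parameters are illustrative assumptions, not the library's interface.

```python
import numpy as np

def moshpit_average_simulation(values, grid_shape, num_rounds):
    """Simulate iterative group averaging on a virtual grid.

    values     : 1-D array with one scalar parameter per peer (illustrative).
    grid_shape : tuple assigning each peer a coordinate on a d-dimensional grid.
    Each round, peers sharing all coordinates except one are averaged together;
    the grouped dimension rotates every round.
    """
    assert np.prod(grid_shape) == len(values)
    state = np.asarray(values, dtype=float).reshape(grid_shape)
    d = len(grid_shape)
    for r in range(num_rounds):
        axis = r % d  # rotate which grid dimension forms the groups this round
        # Averaging along `axis` mimics an all-reduce inside each group of peers
        # whose remaining coordinates coincide.
        state = np.broadcast_to(
            state.mean(axis=axis, keepdims=True), state.shape
        ).copy()
    return state.ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    peers = rng.normal(size=64)  # 64 peers with random local parameters
    result = moshpit_average_simulation(peers, grid_shape=(4, 4, 4), num_rounds=3)
    # With a complete 4x4x4 grid, three rounds reproduce the global average.
    print(np.allclose(result, peers.mean()))  # True
```

In this idealized full-grid setting the exact global average is reached after one pass over the grid dimensions; the exponential convergence mentioned in the abstract concerns the realistic case where peers fail and groups remain incomplete, so the simulation is only meant to convey the grouping-and-rotation mechanics.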