HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
MaRDI QID: Q6226289
arXiv: 1106.5730
Authors: Feng Niu, Benjamin Recht, Christopher Ré, Stephen J. Wright
Publication date: 28 June 2011
Abstract: Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show, using novel theoretical analysis, algorithms, and implementation, that SGD can be implemented without any locking. We present an update scheme called HOGWILD! which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! outperforms alternative schemes that use locking by an order of magnitude.
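A minimal sketch of the update scheme described above, assuming a synthetic sparse least-squares problem: several threads run SGD against a single shared weight vector and write their sparse updates without any locking, so concurrent writes may interleave. This is an illustration only, not the authors' implementation; the data, step size, thread count, and all names are assumptions, and in CPython the GIL limits true parallel speedup, so the snippet demonstrates the lock-free update pattern rather than the paper's performance results.

import threading
import numpy as np

rng = np.random.default_rng(0)

n_features = 1_000
n_samples = 10_000
nnz_per_sample = 10          # each example touches only a few coordinates

# Synthetic sparse data: row s has nonzero values val[s] at columns idx[s].
idx = rng.integers(0, n_features, size=(n_samples, nnz_per_sample))
val = rng.normal(size=(n_samples, nnz_per_sample))
w_true = rng.normal(size=n_features)
y = np.einsum("ij,ij->i", val, w_true[idx]) + 0.01 * rng.normal(size=n_samples)

w = np.zeros(n_features)     # shared decision variable, updated without locks

def worker(sample_ids, step=0.05, epochs=5):
    for _ in range(epochs):
        for s in sample_ids:
            j, x = idx[s], val[s]
            # Gradient of 0.5 * (x . w[j] - y[s])^2 w.r.t. the touched coordinates.
            resid = float(x @ w[j]) - y[s]
            # Lock-free write: another thread may update w concurrently.
            w[j] -= step * resid * x

threads = [threading.Thread(target=worker, args=(chunk,))
           for chunk in np.array_split(rng.permutation(n_samples), 4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("relative parameter error:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))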