Dual-Free Stochastic Decentralized Optimization with Variance Reduction
arXiv: 2006.14384
MaRDI QID: Q6343733
Author name not available
Publication date: 25 June 2020
Abstract: We consider the problem of training machine learning models on distributed data in a decentralized way. For finite-sum problems, fast single-machine algorithms for large datasets rely on stochastic updates combined with variance reduction. Yet, existing decentralized stochastic algorithms either do not obtain the full speedup allowed by stochastic updates, or require oracles that are more expensive than regular gradients. In this work, we introduce a Decentralized stochastic algorithm with Variance Reduction called DVR. DVR only requires computing stochastic gradients of the local functions, and is computationally as fast as a standard stochastic variance-reduced algorithm run on a 1/n fraction of the dataset, where n is the number of nodes. To derive DVR, we use Bregman coordinate descent on a well-chosen dual problem, and obtain a dual-free algorithm using a specific Bregman divergence. We give an accelerated version of DVR based on the Catalyst framework, and illustrate its effectiveness with simulations on real data.
Has companion code repository: https://github.com/HadrienHx/DVR_NeurIPS
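
Below is a minimal illustrative sketch (Python with NumPy) of the two ingredients the abstract combines: local stochastic gradients with SAGA-style variance reduction at each node, plus gossip averaging over a communication graph. It is not the DVR algorithm itself (DVR is instead derived via Bregman coordinate descent on a dual problem); the ridge-regression objective, the gossip matrix W, and all function names here are illustrative assumptions, and the authors' actual implementation is in the companion repository above.

import numpy as np

def local_grad(A_i, b_i, x, j, lam):
    # Stochastic gradient of sample j of node i's ridge-regression objective
    # f_i(x) = (1/m_i) * sum_j 0.5 * (a_j^T x - b_j)^2 + (lam/2) * ||x||^2.
    a = A_i[j]
    return (a @ x - b_i[j]) * a + lam * x

def decentralized_vr_gossip(A, b, W, lam=0.1, step=0.01, iters=2000, seed=0):
    # Hypothetical baseline, not the paper's DVR: each node runs SAGA-style
    # variance-reduced steps on its local data, then nodes average their
    # iterates with their neighbors through the gossip matrix W.
    rng = np.random.default_rng(seed)
    n, d = len(A), A[0].shape[1]
    x = np.zeros((n, d))                      # one local iterate per node
    mem = [np.stack([local_grad(A[i], b[i], x[i], j, lam)
                     for j in range(len(b[i]))]) for i in range(n)]
    mem_avg = np.stack([m.mean(axis=0) for m in mem])
    for _ in range(iters):
        for i in range(n):
            j = rng.integers(len(b[i]))
            g = local_grad(A[i], b[i], x[i], j, lam)
            vr = g - mem[i][j] + mem_avg[i]   # variance-reduced direction
            mem_avg[i] += (g - mem[i][j]) / len(b[i])
            mem[i][j] = g
            x[i] -= step * vr
        x = W @ x                             # one round of gossip averaging
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, m, d = 4, 50, 5
    A = [rng.standard_normal((m, d)) for _ in range(n)]
    b = [A_i @ np.ones(d) + 0.01 * rng.standard_normal(m) for A_i in A]
    W = np.zeros((n, n))                      # ring-topology gossip matrix
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    x = decentralized_vr_gossip(A, b, W)
    print("max disagreement between nodes:", np.abs(x - x.mean(axis=0)).max())

The nested loop over nodes only simulates parallel execution; in a real decentralized setting each node would take its local step and exchange iterates with its neighbors concurrently.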