Asynchronous stochastic convex optimization

arXiv: 1508.00882 · MaRDI QID: Q6264214

Author name not available

Publication date: 4 August 2015

Abstract: We show that asymptotically, completely asynchronous stochastic gradient procedures achieve optimal (even to constant factors) convergence rates for the solution of convex optimization problems under nearly the same conditions required for asymptotic optimality of standard stochastic gradient procedures. Roughly, the noise inherent to the stochastic approximation scheme dominates any noise from asynchrony. We also give empirical evidence demonstrating the strong performance of asynchronous, parallel stochastic optimization schemes, showing that the robustness inherent to stochastic approximation problems allows substantially faster parallel and asynchronous solution methods.
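The procedure the abstract refers to is, in outline, lock-free asynchronous stochastic gradient descent: several workers repeatedly sample a data point, compute a stochastic gradient at whatever (possibly stale) iterate they happen to read, and write the update back to shared memory without synchronization. Below is a minimal Python sketch on a synthetic least-squares problem; the problem instance, the step size eta0/sqrt(t), and the worker count are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of lock-free asynchronous SGD on a synthetic
# least-squares problem. Illustrative only: problem data, step size,
# and worker count are assumptions, not the authors' setup.
import threading
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

x = np.zeros(d)            # shared iterate; read and written without locks
num_workers, steps = 4, 20000
eta0 = 0.01                # assumed base step size

def worker(seed):
    local = np.random.default_rng(seed)
    for t in range(1, steps + 1):
        i = int(local.integers(n))
        # Stochastic gradient of 0.5 * (a_i^T x - b_i)^2 at a possibly
        # stale read of the shared iterate x.
        g = (A[i] @ x - b[i]) * A[i]
        # Unsynchronized in-place update: concurrent workers' reads and
        # writes may interleave, which is the asynchrony being modeled.
        np.subtract(x, (eta0 / np.sqrt(t)) * g, out=x)

threads = [threading.Thread(target=worker, args=(s,)) for s in range(num_workers)]
for th in threads:
    th.start()
for th in threads:
    th.join()

print("||x - x_true|| =", np.linalg.norm(x - x_true))
```

The single shared array updated in place without locking is the essential ingredient: the paper's claim is that the extra noise such unsynchronized updates introduce is asymptotically dominated by the sampling noise already present in the stochastic gradients. (CPython's GIL serializes bytecode execution, so this sketch only simulates asynchrony; thread interleavings still produce stale reads of x.)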

Has companion code repository: https://worksheets.codalab.org/worksheets/0x610bcdb722bf48d3b537a65edf0fe72d
