SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
arXiv: 1703.00102
MaRDI QID: Q6283739
Author name not available
Publication date: 28 February 2017
Abstract: In this paper, we propose the StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to finite-sum minimization problems. Unlike vanilla SGD and other modern stochastic methods such as SVRG, S2GD, SAG, and SAGA, SARAH admits a simple recursive framework for updating stochastic gradient estimates; compared to SAG/SAGA, SARAH does not require storage of past gradients. We prove a linear convergence rate for SARAH under the strong convexity assumption. We also prove a linear convergence rate (in the strongly convex case) for an inner loop of SARAH, a property that SVRG does not possess. Numerical experiments demonstrate the efficiency of the algorithm.
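For concreteness, the recursive framework the abstract refers to anchors each outer loop with one full gradient and then updates the estimate as v_t = ∇f_{i_t}(w_t) − ∇f_{i_t}(w_{t−1}) + v_{t−1}, so only the previous iterate and estimate are kept rather than a table of past component gradients. Below is a minimal Python sketch under that reading; the function names (sarah, grad_i, full_grad), the synthetic least-squares data, and all step-size and loop-length values are illustrative assumptions, not part of this page.

```python
import numpy as np

def sarah(grad_i, full_grad, w0, n, eta=0.02, inner_m=100, outer_T=10, seed=0):
    """Minimal SARAH sketch for minimizing (1/n) * sum_i f_i(w).

    grad_i(w, i)  -- gradient of the i-th component f_i at w (assumed interface)
    full_grad(w)  -- full gradient (1/n) * sum_i grad_i(w, i) (assumed interface)
    """
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(outer_T):
        # Outer loop: anchor the estimator with one full-gradient step.
        w_prev = w.copy()
        v = full_grad(w)
        w = w - eta * v
        # Inner loop: recursive update v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}.
        # Unlike SAG/SAGA, no table of past component gradients is stored.
        for _ in range(inner_m):
            i = rng.integers(n)
            v = grad_i(w, i) - grad_i(w_prev, i) + v
            w_prev = w.copy()
            w = w - eta * v
    return w

# Illustrative use on a synthetic least-squares problem (hypothetical data):
A = np.random.default_rng(1).normal(size=(200, 5))
b = A @ np.ones(5)
grad_i = lambda w, i: A[i] * (A[i] @ w - b[i])
full_grad = lambda w: A.T @ (A @ w - b) / len(b)
w_hat = sarah(grad_i, full_grad, np.zeros(5), n=len(b))
```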