Non-convex Finite-Sum Optimization Via SCSG Methods

From MaRDI portal
Publication:6288382

arXiv: 1706.09156
MaRDI QID: Q6288382

Author name not available

Publication date: 28 June 2017

Abstract: We develop a class of algorithms, as variants of the stochastically controlled stochastic gradient (SCSG) methods (Lei and Jordan, 2016), for the smooth non-convex finite-sum optimization problem. Assuming the smoothness of each component, the complexity of SCSG to reach a stationary point with $\mathbb{E}\|\nabla f(x)\|^{2} \le \epsilon$ is $O\left(\min\{\epsilon^{-5/3}, \epsilon^{-1} n^{2/3}\}\right)$, which strictly outperforms stochastic gradient descent. Moreover, SCSG is never worse than the state-of-the-art methods based on variance reduction, and it significantly outperforms them when the target accuracy is low. A similar acceleration is also achieved when the functions satisfy the Polyak-Łojasiewicz condition. Empirical experiments demonstrate that SCSG outperforms stochastic gradient methods on training multi-layer neural networks in terms of both training and validation loss.
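The abstract describes the SCSG scheme only at a high level. The following is a minimal NumPy sketch of one SCSG-style variant, assuming an SVRG-type corrected inner update and a geometrically distributed inner-loop length as in Lei and Jordan (2016); the names scsg and grad_i, and all batch sizes and the step size, are illustrative assumptions rather than the authors' reference implementation (see the companion repository linked below for actual code).

import numpy as np

def scsg(grad_i, n, x0, outer_batch=64, inner_batch=1,
         step_size=0.05, n_outer=100, seed=None):
    """Sketch of an SCSG-style variant (illustrative, not the reference code).

    grad_i(x, idx): average gradient of the components indexed by idx at x.
    n: number of components in the finite sum.
    """
    rng = np.random.default_rng(seed)
    x_tilde = np.asarray(x0, dtype=float)
    for _ in range(n_outer):
        # Outer step: a large-batch gradient anchors the variance reduction.
        I = rng.choice(n, size=outer_batch, replace=False)
        g_anchor = grad_i(x_tilde, I)
        x = x_tilde.copy()
        # Geometrically distributed inner-loop length (mean roughly
        # outer_batch / inner_batch), a distinguishing feature of SCSG
        # compared with plain SVRG.
        n_inner = rng.geometric(inner_batch / (inner_batch + outer_batch))
        for _ in range(n_inner):
            j = rng.choice(n, size=inner_batch, replace=False)
            # SVRG-style control variate: stochastic gradient corrected by the anchor.
            v = grad_i(x, j) - grad_i(x_tilde, j) + g_anchor
            x = x - step_size * v
        x_tilde = x
    return x_tilde

# Illustrative use on a synthetic least-squares finite sum
# f(x) = (1/n) * sum_i (a_i^T x - b_i)^2 / 2.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 10))
b = A @ np.ones(10)
grad = lambda x, idx: A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)
x_hat = scsg(grad, n=200, x0=np.zeros(10), seed=1)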

Has companion code repository: https://github.com/SamuelHorvath/Variance_Reduced_Optimizers_Pytorch

