Convergence of stochastic gradient descent schemes for Łojasiewicz-landscapes
Publication: 6504373
arXiv: 2102.09385
MaRDI QID: Q6504373
Author name not available.
Abstract: In this article, we consider convergence of stochastic gradient descent schemes (SGD) under weak assumptions on the underlying landscape. More explicitly, we show that, on the event that the SGD stays local, the SGD converges if there are only countably many critical points or if the target function (landscape) satisfies Łojasiewicz inequalities around all critical levels, as all analytic functions do. In particular, we show that for neural networks with an analytic activation function, such as the softplus, the sigmoid, or the hyperbolic tangent, SGD converges on the event of staying local, provided the random variables modeling the signal and response in the training are compactly supported.
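For orientation, the Łojasiewicz inequality invoked in the abstract is, in its standard gradient form (a textbook statement, not quoted from the paper): for a critical point $x^*$ of $f$ there exist $\theta \in (0, 1/2]$, $C > 0$ and $\varepsilon > 0$ such that
\[
  |f(x) - f(x^*)|^{1-\theta} \le C\, \|\nabla f(x)\| \qquad \text{for all } x \text{ with } \|x - x^*\| \le \varepsilon,
\]
and the SGD scheme is, generically, the recursion $X_{n+1} = X_n - \gamma_{n+1}\big(\nabla f(X_n) + D_{n+1}\big)$ with step sizes satisfying $\sum_n \gamma_n = \infty$ and $\sum_n \gamma_n^2 < \infty$ and martingale-difference noise $(D_n)$; the precise assumptions are those stated in the paper.

As a toy illustration of the setting described in the abstract (a minimal sketch, not the paper's construction: the network width, data distribution, loss, and step-size constant below are illustrative assumptions), the following runs plain SGD with decreasing steps on a one-hidden-layer network with the analytic softplus activation, trained on compactly supported synthetic data:

    # Plain SGD on a one-hidden-layer softplus network; the signal X is
    # uniform on [-1, 1] and the response Y is bounded, so both are
    # compactly supported as in the abstract's hypothesis.
    import numpy as np

    rng = np.random.default_rng(0)

    def softplus(z):
        # Numerically stable log(1 + exp(z)).
        return np.maximum(z, 0.0) + np.log1p(np.exp(-np.abs(z)))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sample():
        x = rng.uniform(-1.0, 1.0)       # compactly supported signal
        return x, np.sin(np.pi * x)      # bounded response

    d = 8                                # hidden width (illustrative)
    w1, b1 = rng.normal(size=d), np.zeros(d)
    w2, b2 = rng.normal(size=d), 0.0

    for n in range(1, 100_001):
        x, y = sample()
        pre = w1 * x + b1
        h = softplus(pre)                # analytic activation
        err = (w2 @ h + b2) - y          # residual of squared loss 0.5*err**2
        # Single-sample backpropagated gradients; softplus' = sigmoid.
        g_w2, g_b2 = err * h, err
        dh = err * w2 * sigmoid(pre)
        g_w1, g_b1 = dh * x, dh
        gamma = 0.5 / n                  # sum gamma_n = inf, sum gamma_n^2 < inf
        w1 -= gamma * g_w1; b1 -= gamma * g_b1
        w2 -= gamma * g_w2; b2 -= gamma * g_b2

    print("final sample loss:", 0.5 * err**2)

Since softplus is analytic and the data are compactly supported, the associated population loss is the kind of Łojasiewicz landscape the abstract refers to; on runs where the iterates stay local, the convergence result predicts that the parameters settle at a single point rather than oscillating between critical points.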