Leveraging the two-timescale regime to demonstrate convergence of neural networks


arXiv: 2304.09576 · MaRDI QID: Q6433605

Author name not available

Publication date: 19 April 2023

Abstract: We study the training dynamics of shallow neural networks, in a two-timescale regime in which the stepsizes for the inner layer are much smaller than those for the outer layer. In this regime, we prove convergence of the gradient flow to a global optimum of the non-convex optimization problem in a simple univariate setting. The number of neurons need not be asymptotically large for our result to hold, distinguishing our result from popular recent approaches such as the neural tangent kernel or mean-field regimes. Experimental illustration is provided, showing that stochastic gradient descent behaves according to our description of the gradient flow and thus converges to a global optimum in the two-timescale regime, but can fail outside of this regime.
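The two-timescale regime described in the abstract amounts to scaling the inner-layer stepsize by a small factor relative to the outer-layer stepsize. The following minimal sketch illustrates this idea for SGD on a shallow univariate ReLU network; it is not the authors' implementation (see the companion repository below for the actual experiments), and all names, the target function, and hyperparameter values are illustrative assumptions.

```python
# Two-timescale SGD on a shallow network f(x) = sum_j a_j * relu(w_j * x + b_j).
# Inner-layer parameters (w, b) move on a slower timescale than the outer
# weights (a): their stepsize is scaled by a small factor `epsilon`.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Simple univariate target to fit; chosen arbitrarily for illustration.
    return np.sin(3 * x)

m = 20                      # number of neurons (need not be large)
w = rng.normal(size=m)      # inner-layer weights
b = rng.normal(size=m)      # inner-layer biases
a = rng.normal(size=m) / m  # outer-layer weights

eta = 0.05        # outer-layer (fast) stepsize
epsilon = 1e-2    # timescale separation: inner stepsize = epsilon * eta

for step in range(50_000):
    x = rng.uniform(-1.0, 1.0)          # one sample -> stochastic gradient
    pre = w * x + b                     # pre-activations, shape (m,)
    act = np.maximum(pre, 0.0)          # ReLU activations
    err = a @ act - target(x)           # d(loss)/d(pred) for squared loss / 2

    mask = (pre > 0).astype(float)      # ReLU derivative
    a -= eta * err * act                # fast (outer) update
    w -= epsilon * eta * err * a * mask * x  # slow (inner) updates
    b -= epsilon * eta * err * a * mask

xs = np.linspace(-1.0, 1.0, 200)
preds = np.maximum(np.outer(xs, w) + b, 0.0) @ a
print("mean squared error:", np.mean((preds - target(xs)) ** 2))
```

Setting `epsilon` close to 1 removes the timescale separation, which is the setting in which, per the abstract, SGD can fail to reach a global optimum.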

Has companion code repository: https://github.com/PierreMarion23/two-timescale-nn
