Large-time asymptotics in deep learning
Publication: Q6346591
arXiv: 2008.02491 · MaRDI QID: Q6346591
Author name not available
Publication date: 6 August 2020
Abstract: We consider the neural ODE perspective of supervised learning and study the impact of the final time $T$ (which may indicate the depth of a corresponding ResNet) in training. For the classical $L^2$-regularized empirical risk minimization problem, whenever the neural ODE dynamics are homogeneous with respect to the parameters, we show that the training error is at most of the order $\mathcal{O}(1/T)$. Furthermore, if the loss inducing the empirical risk attains its minimum, the optimal parameters converge to minimal $L^2$-norm parameters which interpolate the dataset. By a natural scaling between $T$ and the regularization hyperparameter $\lambda$, we obtain the same results when $\lambda \searrow 0$ and $T$ is fixed. This allows us to stipulate generalization properties in the overparametrized regime, now seen from the large-depth, neural ODE perspective. To enhance the polynomial decay, inspired by turnpike theory in optimal control, we propose a learning problem with an additional integral regularization term of the neural ODE trajectory over $[0,T]$. In the setting of $\ell^p$-distance losses, we prove that both the training error and the optimal parameters are at most of the order $\mathcal{O}(e^{-\mu t})$ in any $t \in [0,T]$. The aforementioned stability estimates are also shown for continuous space-time neural networks, taking the form of nonlinear integro-differential equations. By using a time-dependent moving grid for discretizing the spatial variable, we demonstrate that these equations provide a framework for addressing ResNets with variable widths.
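To make the setting of the abstract concrete, the following is a minimal, self-contained sketch, not taken from the paper or its companion repository: a forward-Euler discretization of a neural ODE (a ResNet whose depth stands in for the time horizon $[0,T]$) trained on a toy two-point dataset by minimizing the empirical risk plus an $L^2$ parameter penalty. The dynamics, toy data, and hyperparameters below are illustrative assumptions, not the paper's exact construction.

# Minimal sketch (assumes PyTorch): forward-Euler discretization of the neural ODE
#   x'(t) = W(t) tanh(A(t) x(t) + b(t)),
# with N Euler steps playing the role of the time horizon [0, T].
# All names and hyperparameters are illustrative, not taken from the paper.
import torch

torch.manual_seed(0)
T, N = 10.0, 50                    # final time and number of Euler steps (depth)
dt = T / N
d = 2                              # state dimension

# piecewise-constant, time-dependent parameters: one set per layer
A = torch.nn.Parameter(0.1 * torch.randn(N, d, d))
b = torch.nn.Parameter(torch.zeros(N, d))
W = torch.nn.Parameter(0.1 * torch.randn(N, d, d))

X = torch.tensor([[1.0, 0.0], [-1.0, 0.0]])   # toy inputs
Y = torch.tensor([[0.0, 1.0], [0.0, -1.0]])   # toy targets

def flow(x):
    # forward Euler: x_{k+1} = x_k + dt * W_k tanh(A_k x_k + b_k)
    for k in range(N):
        x = x + dt * torch.tanh(x @ A[k].T + b[k]) @ W[k].T
    return x

lam = 1e-3                         # regularization hyperparameter
opt = torch.optim.Adam([A, b, W], lr=1e-2)
for it in range(2000):
    opt.zero_grad()
    risk = ((flow(X) - Y) ** 2).mean()                                     # empirical risk
    reg = lam * dt * (A.pow(2).sum() + b.pow(2).sum() + W.pow(2).sum())    # Riemann sum of the L^2 penalty over [0, T]
    (risk + reg).backward()
    opt.step()

print(f"training error: {risk.item():.2e}")

Increasing N (equivalently T) while keeping lam fixed is the regime to which the abstract's $\mathcal{O}(1/T)$ estimate would apply; the turnpike-inspired variant would additionally track the empirical risk along the whole trajectory, e.g. by accumulating dt * ((x - Y) ** 2).mean() inside the Euler loop.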
Has companion code repository: https://github.com/borjanG/dynamical.systems