Continuous-in-Depth Neural Networks

Publication: 6346575

arXiv: 2008.02389
MaRDI QID: Q6346575

Author name not available

Publication date: 5 August 2020

Abstract: Recent work has attempted to interpret residual networks (ResNets) as one step of a forward Euler discretization of an ordinary differential equation, focusing mainly on syntactic algebraic similarities between the two systems. Discrete dynamical integrators of continuous dynamical systems, however, have a much richer structure. We first show that ResNets fail to be meaningful dynamical integrators in this richer sense. We then demonstrate that neural network models can learn to represent continuous dynamical systems, with this richer structure and properties, by embedding them into higher-order numerical integration schemes, such as the Runge-Kutta schemes. Based on these insights, we introduce ContinuousNet as a continuous-in-depth generalization of ResNet architectures. ContinuousNets exhibit an invariance to the particular computational graph manifestation. That is, the continuous-in-depth model can be evaluated with different discrete time step sizes, which changes the number of layers, and different numerical integration schemes, which changes the graph connectivity. We show that this can be used to develop an incremental-in-depth training scheme that improves model quality, while significantly decreasing training time. We also show that, once trained, the number of units in the computational graph can even be decreased, for faster inference with little-to-no accuracy drop.
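
As a rough sketch of the idea in the abstract (not the authors' implementation: the residual function, its fixed parameters, and the step counts below are invented for illustration), a continuous-in-depth block treats the residual map as a vector field integrated over a unit of "depth time"; the same weights can then be evaluated with a different number of steps (which changes the effective depth) or a different integrator (which changes the graph connectivity):

```python
import numpy as np

def f(theta, x):
    # Stand-in residual function F(x; theta): a single tanh layer (hypothetical).
    W, b = theta
    return np.tanh(W @ x + b)

def euler_step(theta, x, h):
    # One forward-Euler step x + h * F(x); with h = 1 this is the usual ResNet block reading.
    return x + h * f(theta, x)

def rk4_step(theta, x, h):
    # Classical fourth-order Runge-Kutta step over the same vector field.
    k1 = f(theta, x)
    k2 = f(theta, x + 0.5 * h * k1)
    k3 = f(theta, x + 0.5 * h * k2)
    k4 = f(theta, x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(theta, x0, n_steps, step_fn):
    # Evaluate the same parameters with a chosen step count and scheme:
    # different n_steps / step_fn give different computational graphs that
    # approximate the same underlying trajectory.
    h = 1.0 / n_steps
    x = x0
    for _ in range(n_steps):
        x = step_fn(theta, x, h)
    return x

rng = np.random.default_rng(0)
theta = (0.1 * rng.standard_normal((4, 4)), np.zeros(4))
x0 = rng.standard_normal(4)

# Same weights, three different discretizations of depth:
print(integrate(theta, x0, n_steps=4,  step_fn=euler_step))
print(integrate(theta, x0, n_steps=16, step_fn=euler_step))
print(integrate(theta, x0, n_steps=4,  step_fn=rk4_step))
```

Note that this sketch keeps the parameters fixed across steps for brevity; the paper's ContinuousNet additionally lets the parameters vary with the depth variable.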

Has companion code repository: https://github.com/afqueiruga/ContinuousNet