Optimizing Neural Networks via Koopman Operator Theory

Publication: 6342025

arXiv: 2006.02361
MaRDI QID: Q6342025

Author name not available

Publication date: 3 June 2020

Abstract: Koopman operator theory, a powerful framework for discovering the underlying dynamics of nonlinear dynamical systems, was recently shown to be intimately connected with neural network training. In this work, we take the first steps in making use of this connection. As Koopman operator theory is a linear theory, successfully using it to evolve network weights and biases offers the promise of accelerated training, especially in the context of deep networks, where optimization is an inherently non-convex problem. We show that Koopman operator theoretic methods allow for accurate predictions of the weights and biases of feedforward, fully connected deep networks over a non-trivial range of training time. During this window, we find that our approach is more than 10x faster than various gradient-descent-based methods (e.g., Adam, Adadelta, Adagrad), in line with our complexity analysis. We end by highlighting open questions in this exciting intersection between dynamical systems and neural network theory, and by outlining additional methods by which our results could be extended to broader classes of networks and larger training intervals, which will be the focus of future work.
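The core idea sketched in the abstract — treating the sequence of weights and biases produced by gradient descent as a dynamical system and advancing it with a learned linear (Koopman-style) operator — can be illustrated with a small, self-contained example. The snippet below is a minimal sketch, not the authors' implementation (see the companion repository linked further down): it fits a plain dynamic-mode-decomposition operator to weight snapshots of a toy fully connected network, and all network sizes, learning rates, and step counts are illustrative assumptions.

```python
# Minimal sketch: fit a linear operator A to weight-trajectory snapshots
# from a short burst of gradient descent, then extrapolate future weights
# by applying A repeatedly instead of taking further gradient steps.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task and a tiny fully connected network (one hidden layer).
X = rng.normal(size=(200, 3))
y = np.sin(X @ np.array([1.0, -2.0, 0.5]))[:, None]

W1 = rng.normal(scale=0.5, size=(3, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def flatten(W1, W2):
    # Stack all trainable parameters into one state vector.
    return np.concatenate([W1.ravel(), W2.ravel()])

def grad_step(W1, W2, lr=0.05):
    # One full-batch gradient descent step on mean squared error.
    H = np.tanh(X @ W1)
    pred = H @ W2
    err = pred - y
    gW2 = H.T @ err / len(X)
    gH = err @ W2.T * (1 - H**2)
    gW1 = X.T @ gH / len(X)
    return W1 - lr * gW1, W2 - lr * gW2

# 1) Run a short window of ordinary training and record weight snapshots.
snapshots = [flatten(W1, W2)]
for _ in range(30):
    W1, W2 = grad_step(W1, W2)
    snapshots.append(flatten(W1, W2))
S = np.stack(snapshots, axis=1)                  # shape (n_params, n_snapshots)

# 2) Fit a linear operator A so that S[:, t+1] ~= A @ S[:, t] (least squares).
A = S[:, 1:] @ np.linalg.pinv(S[:, :-1])

# 3) Predict weights several steps ahead using A alone (no gradients needed).
w_pred = S[:, -1]
for _ in range(20):
    w_pred = A @ w_pred

# 4) Compare against actually continuing gradient descent for the same steps.
for _ in range(20):
    W1, W2 = grad_step(W1, W2)
w_true = flatten(W1, W2)
print("relative prediction error:",
      np.linalg.norm(w_pred - w_true) / np.linalg.norm(w_true))
```

How accurate the extrapolation remains depends on how close the training dynamics are to linear over the chosen window; this is the regime the abstract refers to as a non-trivial (but finite) range of training time over which the Koopman-based prediction tracks gradient descent.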




Has companion code repository: https://github.com/william-redman/Koopman-Neural-Network-Training







