Contracting Implicit Recurrent Neural Networks: Stable Models with Improved Trainability
From MaRDI portal
Publication: Q6331606
arXiv: 1912.10402 · MaRDI QID: Q6331606
Author name not available
Publication date: 22 December 2019
Abstract: Stability of recurrent models is closely linked with trainability, generalizability, and, in some applications, safety. Existing methods for training stable recurrent neural networks, however, do so at a significant cost to expressibility. We propose an implicit model structure that allows for a convex parametrization of stable models using contraction analysis of non-linear systems. Using these stability conditions, we propose a new approach to model initialization and then provide a number of empirical results comparing the performance of our proposed model set to previous stable RNNs and vanilla RNNs. By carefully controlling stability in the model, we observe a significant increase in training speed and model performance.
Has companion code repository: https://github.com/imanchester/ci-rnn
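The abstract's core idea is to constrain the recurrent dynamics so that the state map is a contraction, which guarantees that trajectories from different initial states converge. The sketch below illustrates this with a much simpler sufficient condition than the paper's convex parametrization: for a vanilla RNN update x⁺ = tanh(Wx + Bu + b) with a 1-Lipschitz activation, rescaling W to spectral norm γ < 1 makes the map a γ-contraction in the Euclidean metric. All names and parameters here are illustrative assumptions, not the authors' implementation (see the companion repository for that).

```python
import numpy as np

def make_contracting_rnn(n_x, n_u, gamma=0.95, seed=None):
    """Initialize a simple RNN x+ = tanh(W x + B u + b) whose recurrent
    weight W is rescaled to spectral norm gamma < 1. Since tanh is
    1-Lipschitz, the state update is then a gamma-contraction in the
    Euclidean metric -- a simple sufficient condition for stability,
    not the paper's implicit/convex parametrization."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_x, n_x)) / np.sqrt(n_x)
    W *= gamma / np.linalg.norm(W, 2)   # ord=2 gives the largest singular value
    B = rng.standard_normal((n_x, n_u)) / np.sqrt(n_u)
    b = np.zeros(n_x)
    return W, B, b

def step(W, B, b, x, u):
    """One state update of the RNN."""
    return np.tanh(W @ x + B @ u + b)

# Contraction in action: two different initial states, driven by the
# same input sequence, converge toward each other at rate >= gamma.
W, B, b = make_contracting_rnn(n_x=8, n_u=2, seed=0)
x1, x2 = np.ones(8), -np.ones(8)
u = np.zeros(2)
for _ in range(200):
    x1, x2 = step(W, B, b, x1, u), step(W, B, b, x2, u)
gap = np.linalg.norm(x1 - x2)
```

After 200 steps the Lipschitz bound gives a gap of at most 0.95²⁰⁰ times the initial distance, so the two trajectories are numerically indistinguishable, regardless of the (shared) input sequence.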
This page was built for publication: Contracting Implicit Recurrent Neural Networks: Stable Models with Improved Trainability