Gradual Learning of Recurrent Neural Networks
From MaRDI portal
Publication: 6290642
arXiv: 1708.08863 · MaRDI QID: Q6290642
Author name not available
Publication date: 29 August 2017
Abstract: Recurrent Neural Networks (RNNs) achieve state-of-the-art results in many sequence-to-sequence modeling tasks. However, RNNs are difficult to train and tend to suffer from overfitting. Motivated by the Data Processing Inequality (DPI), we formulate the multi-layered network as a Markov chain and introduce a training method that comprises training the network gradually and using layer-wise gradient clipping. We found that applying our methods, combined with previously introduced regularization and optimization methods, resulted in improvements over state-of-the-art architectures on language modeling tasks.
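The layer-wise gradient clipping mentioned in the abstract can be sketched as follows: instead of clipping one global norm across all parameters, each layer's gradient is rescaled to a maximum norm independently. This is a minimal illustrative sketch in plain Python, not the authors' implementation; the function name, data layout, and threshold are assumptions.

```python
import math

def clip_layer_grads(grads_by_layer, max_norm):
    """Clip each layer's gradient vector to max_norm independently.

    grads_by_layer: list of per-layer gradients, each a flat list of floats
    (illustrative layout; real implementations operate on parameter tensors).
    Layer-wise clipping rescales each layer separately, rather than applying
    one global norm bound to the concatenation of all gradients.
    """
    clipped = []
    for g in grads_by_layer:
        norm = math.sqrt(sum(x * x for x in g))
        if norm > max_norm:
            scale = max_norm / norm
            g = [x * scale for x in g]  # rescale so the layer's norm equals max_norm
        clipped.append(g)
    return clipped

# Example: the first layer's gradient (norm 5.0) is clipped to norm 1.0,
# while the second (norm ~0.22) is left untouched.
grads = [[3.0, 4.0], [0.1, 0.2]]
print(clip_layer_grads(grads, 1.0))  # → [[0.6, 0.8], [0.1, 0.2]]
```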
Has companion code repository: https://github.com/zivaharoni/gradual-learning-rnn