MomentumRNN: Integrating Momentum into Recurrent Neural Networks
Publication: 6342721
arXiv: 2006.06919
MaRDI QID: Q6342721
Bao Wang, Andrea L. Bertozzi, Stanley J. Osher, Tan M. Nguyen, Richard G. Baraniuk
Publication date: 11 June 2020
Abstract: Designing deep neural networks is an art that often involves an expensive search over candidate architectures. To overcome this for recurrent neural networks (RNNs), we establish a connection between the hidden state dynamics in an RNN and gradient descent (GD). We then integrate momentum into this framework and propose a new family of RNNs, called MomentumRNNs. We theoretically prove and numerically demonstrate that MomentumRNNs alleviate the vanishing gradient issue in training RNNs. We study the momentum long short-term memory (MomentumLSTM) and verify its advantages in convergence speed and accuracy over its LSTM counterpart across a variety of benchmarks. We also demonstrate that MomentumRNN is applicable to many types of recurrent cells, including those in the state-of-the-art orthogonal RNNs. Finally, we show that other advanced momentum-based optimization methods, such as Adam and Nesterov accelerated gradients with a restart, can be easily incorporated into the MomentumRNN framework for designing new recurrent cells with even better performance. The code is available at https://github.com/minhtannguyen/MomentumRNN.
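The abstract describes viewing the RNN hidden-state update as a gradient-descent step and adding a momentum term to the input-driven part of the recurrence. The following is a minimal PyTorch sketch of that idea for a vanilla (tanh) recurrent cell; the hyperparameter names mu and s, the default values, and this exact cell layout are illustrative assumptions, not the authors' reference implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class MomentumRNNCellSketch(nn.Module):
    """Sketch of a momentum-augmented vanilla RNN cell.

    Assumed recurrence: the input-driven term U x_t of the standard update
    h_t = tanh(U x_t + W h_{t-1}) is replaced by a momentum state
    v_t = mu * v_{t-1} + s * U x_t, so h_t = tanh(W h_{t-1} + v_t).
    """

    def __init__(self, input_size, hidden_size, mu=0.6, s=0.6):
        super().__init__()
        self.U = nn.Linear(input_size, hidden_size, bias=False)
        self.W = nn.Linear(hidden_size, hidden_size, bias=True)
        self.mu, self.s = mu, s

    def forward(self, x_t, state):
        h_prev, v_prev = state
        # Momentum accumulation on the input-driven "gradient" term.
        v_t = self.mu * v_prev + self.s * self.U(x_t)
        # Usual RNN nonlinearity applied to the momentum-corrected update.
        h_t = torch.tanh(self.W(h_prev) + v_t)
        return h_t, (h_t, v_t)

# Usage: unroll the cell over a toy sequence of shape (time, batch, features).
cell = MomentumRNNCellSketch(input_size=8, hidden_size=16)
x = torch.randn(5, 3, 8)
h = torch.zeros(3, 16)
v = torch.zeros(3, 16)
for t in range(x.size(0)):
    out, (h, v) = cell(x[t], (h, v))
```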
Has companion code repository: https://github.com/minhtannguyen/MomentumRNN