
adaQN: An Adaptive Quasi-Newton Algorithm for Training RNNs

Publication: 121139

DOI: 10.48550/ARXIV.1511.01169
arXiv: 1511.01169
MaRDI QID: Q121139

Nitish Shirish Keskar, Albert S. Berahas

Publication date: 4 November 2015

Abstract: Recurrent Neural Networks (RNNs) are powerful models that achieve exceptional performance on several pattern recognition problems. However, the training of RNNs is a computationally difficult task owing to the well-known "vanishing/exploding" gradient problem. Algorithms proposed for training RNNs either exploit no (or limited) curvature information and have cheap per-iteration complexity, or attempt to gain significant curvature information at the cost of increased per-iteration cost. The former set includes diagonally-scaled first-order methods such as ADAGRAD and ADAM, while the latter consists of second-order algorithms like Hessian-Free Newton and K-FAC. In this paper, we present adaQN, a stochastic quasi-Newton algorithm for training RNNs. Our approach retains a low per-iteration cost while allowing for non-diagonal scaling through a stochastic L-BFGS updating scheme. The method uses a novel L-BFGS scaling initialization scheme and is judicious in storing and retaining L-BFGS curvature pairs. We present numerical experiments on two language modeling tasks and show that adaQN is competitive with popular RNN training algorithms.
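As a rough illustration of the limited-memory quasi-Newton machinery the abstract refers to, the Python sketch below shows the standard L-BFGS two-loop recursion, which produces a non-diagonally scaled search direction from stored curvature pairs. This is not the authors' implementation: the function name, variable names, and the fixed gamma scaling are illustrative assumptions, and adaQN's actual scaling initialization and its rules for storing and retaining curvature pairs are described in the paper itself.

# Minimal sketch (illustrative, not the authors' code) of the L-BFGS
# two-loop recursion that limited-memory quasi-Newton methods such as
# adaQN build on. Names and the constant gamma are assumptions.
import numpy as np

def lbfgs_direction(grad, s_list, y_list, gamma=1.0):
    """Return the search direction -H*grad from stored curvature pairs.

    grad   : current (stochastic) gradient, shape (d,)
    s_list : parameter differences s_k = x_{k+1} - x_k (oldest first)
    y_list : gradient differences  y_k = g_{k+1} - g_k (oldest first)
    gamma  : scaling of the initial Hessian approximation H_0 = gamma * I
    """
    q = grad.copy()
    alphas = []
    # Backward pass: newest curvature pair first.
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / np.dot(y, s)
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        alphas.append((rho, alpha))
    # Apply the initial Hessian approximation H_0 = gamma * I.
    r = gamma * q
    # Forward pass: oldest curvature pair first.
    for (s, y), (rho, alpha) in zip(zip(s_list, y_list), reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s
    return -r  # quasi-Newton descent direction

Per the abstract, adaQN's contribution sits on top of this recursion: it chooses the initial scaling in a novel way and is selective about which (s, y) pairs are stored and retained, keeping the per-iteration cost low while still allowing non-diagonal scaling.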







Related Items (1)






