Online Batch Selection for Faster Training of Neural Networks

Publication:6267586

arXiv: 1511.06343
MaRDI QID: Q6267586

Authors: Ilya Loshchilov, Frank Hutter

Publication date: 19 November 2015

Abstract: Deep neural networks are commonly trained using stochastic non-convex optimization procedures, which are driven by gradient information estimated on fractions (batches) of the dataset. While it is commonly accepted that batch size is an important parameter for offline tuning, the benefits of online selection of batches remain poorly understood. We investigate online batch selection strategies for two state-of-the-art methods of stochastic gradient-based optimization, AdaDelta and Adam. As the loss function to be minimized for the whole dataset is an aggregation of loss functions of individual datapoints, intuitively, datapoints with the greatest loss should be considered (selected in a batch) more frequently. However, the limitations of this intuition and the proper control of the selection pressure over time are open questions. We propose a simple strategy where all datapoints are ranked w.r.t. their latest known loss value and the probability to be selected decays exponentially as a function of rank. Our experimental results on the MNIST dataset suggest that selecting batches speeds up both AdaDelta and Adam by a factor of about 5.
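The rank-based selection rule described in the abstract is simple enough to sketch directly. The snippet below is a minimal illustration, not the authors' implementation: it assumes a single selection-pressure parameter that fixes the ratio between the selection probabilities of the highest- and lowest-ranked datapoints, and the function names and the default pressure of 100 are placeholders.

    import numpy as np

    def selection_probs(n, selection_pressure=100.0):
        """Exponentially decaying selection probabilities over loss ranks.

        Rank 0 is the datapoint with the highest latest-known loss.
        selection_pressure is assumed here to be the ratio between the
        probabilities of the top- and bottom-ranked datapoints.
        """
        ranks = np.arange(n)
        weights = np.exp(-np.log(selection_pressure) * ranks / n)
        return weights / weights.sum()

    def sample_batch(latest_losses, batch_size, selection_pressure=100.0, rng=None):
        """Sample a batch of dataset indices, favouring high-loss datapoints."""
        rng = np.random.default_rng() if rng is None else rng
        order = np.argsort(-np.asarray(latest_losses))  # indices sorted by descending loss
        probs = selection_probs(len(latest_losses), selection_pressure)
        picked_ranks = rng.choice(len(latest_losses), size=batch_size, replace=False, p=probs)
        return order[picked_ranks]

    # Example: 10 000 datapoints, batch of 64
    losses = np.random.rand(10_000)
    batch = sample_batch(losses, batch_size=64)

In the paper's setting the latest-known loss of each datapoint would be refreshed whenever it appears in a batch, and, as the abstract notes, how to control the selection pressure over time is itself an open question; both aspects are omitted from this sketch.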




Has companion code repository: https://github.com/Lasagne/Lasagne







