SGDR: Stochastic Gradient Descent with Warm Restarts
Publication: 6276488
arXiv: 1608.03983
MaRDI QID: Q6276488
Authors: Ilya Loshchilov, Frank Hutter
Publication date: 13 August 2016
Abstract: Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR
Has companion code repository: https://github.com/1hb6s7t/SGDR-ms
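The schedule behind these results is cosine annealing with warm restarts: within each run the learning rate decays from a maximum to a minimum value along a cosine curve, and at each restart it is reset to the maximum while the run length grows by a fixed factor. The following Python sketch illustrates that schedule only; the constants eta_min, eta_max, T_0, and T_mult are placeholder values chosen for the example, not taken from this record or the companion repositories.

import math

def sgdr_learning_rate(epoch, eta_min=0.0, eta_max=0.1, T_0=10, T_mult=2):
    """Cosine-annealed learning rate with warm restarts (illustrative values)."""
    # Locate the current restart cycle and the position within it.
    T_i, t_cur = T_0, epoch
    while t_cur >= T_i:
        t_cur -= T_i      # move past the completed cycle
        T_i *= T_mult     # each new run is T_mult times longer
    # Cosine annealing within the current cycle:
    # eta_max at a restart, decaying toward eta_min at the end of the run.
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / T_i))

if __name__ == "__main__":
    # Example: learning rate at the start of each of the first 30 epochs.
    for epoch in range(30):
        print(epoch, round(sgdr_learning_rate(epoch), 4))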