
Stochastic Training is Not Necessary for Generalization

From MaRDI portal
Publication:6378873

arXiv: 2109.14119 · MaRDI QID: Q6378873

Tom Goldstein, Michael Moeller, Jonas Geiping, Micah Goldblum, Phillip E. Pope

Publication date: 28 September 2021

Abstract: It is widely believed that the implicit regularization of SGD is fundamental to the impressive generalization behavior we observe in neural networks. In this work, we demonstrate that non-stochastic full-batch training can achieve comparably strong performance to SGD on CIFAR-10 using modern architectures. To this end, we show that the implicit regularization of SGD can be completely replaced with explicit regularization even when comparing against a strong and well-researched baseline. Our observations indicate that the perceived difficulty of full-batch training may be the result of its optimization properties and the disproportionate time and effort spent by the ML community tuning optimizers and hyperparameters for small-batch training.
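To make the abstract's central claim concrete, here is a minimal toy sketch of the idea that deterministic full-batch training with an explicit regularizer can stand in for SGD's implicit regularization. The setup is hypothetical and much simpler than the paper's (a least-squares problem rather than CIFAR-10 with modern architectures), and the gradient-norm penalty used here is one common choice of explicit regularizer, not necessarily the exact scheme from the companion repository.

```python
import numpy as np

# Toy illustration: deterministic full-batch gradient descent on a
# least-squares problem. An explicit gradient-norm penalty
#   R(w) = (alpha / 2) * ||grad L(w)||^2
# stands in for the implicit regularization that mini-batch SGD noise
# would otherwise provide. All names and parameters here are illustrative.

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))   # the ENTIRE dataset: no mini-batches
w_true = rng.normal(size=5)
y = X @ w_true

def total_grad(w, alpha=0.1):
    """Gradient of L(w) + (alpha/2) * ||grad L(w)||^2 for L = mean squared error / 2."""
    g = X.T @ (X @ w - y) / len(y)   # grad of the data-fitting loss
    H = X.T @ X / len(y)             # Hessian (constant for a quadratic loss)
    return g + alpha * H @ g         # chain rule: grad of the penalty term is alpha * H @ g

w = np.zeros(5)
for _ in range(500):
    w -= 0.1 * total_grad(w)         # one deterministic full-batch step, no sampling noise
```

On this noiseless quadratic the penalized objective shares its minimizer with the plain loss, so full-batch descent recovers `w_true` exactly; the paper's contribution is showing that an analogous explicit-regularization recipe closes the generalization gap in the much harder deep-learning setting.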




Has companion code repository: https://github.com/jonasgeiping/fullbatchtraining
This page was built for publication: Stochastic Training is Not Necessary for Generalization
