DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule
Publication: 6427408
arXiv: 2302.12022
MaRDI QID: Q6427408
Author name not available
Publication date: 8 February 2023
Abstract: We propose a tuning-free dynamic SGD step size formula, which we call Distance over Gradients (DoG). The DoG step sizes depend on simple empirical quantities (distance from the initial point and norms of gradients) and have no "learning rate" parameter. Theoretically, we show that a slight variation of the DoG formula enjoys strong parameter-free convergence guarantees for stochastic convex optimization, assuming only locally bounded stochastic gradients. Empirically, we consider a broad range of vision and language transfer learning tasks, and show that DoG's performance is close to that of SGD with a tuned learning rate. We also propose a per-layer variant of DoG that generally outperforms tuned SGD, approaching the performance of tuned Adam. A PyTorch implementation is available at https://github.com/formll/dog
Has companion code repository: https://github.com/formll/dog
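To make the abstract's description concrete, the following is a minimal Python sketch of the Distance-over-Gradients step size rule: the step size at each iteration is the running maximum distance from the initial point divided by the square root of the running sum of squared gradient norms, seeded with a small initial distance. The exact formula, the seed parameter r_eps, and the function names below are assumptions made for illustration; refer to the companion repository above for the authors' implementation.

import numpy as np

def dog_sgd(grad, x0, steps=1000, r_eps=1e-4):
    """Sketch of SGD with DoG step sizes (formula assumed from the abstract)."""
    x = np.asarray(x0, dtype=float)
    x_init = x.copy()
    max_dist = r_eps      # running max distance from x_init, seeded with r_eps (assumed)
    grad_sq_sum = 0.0     # running sum of squared gradient norms
    for _ in range(steps):
        g = grad(x)
        grad_sq_sum += float(np.dot(g, g))
        max_dist = max(max_dist, np.linalg.norm(x - x_init))
        # DoG step size: distance over gradients, no learning-rate parameter
        eta = max_dist / (np.sqrt(grad_sq_sum) + 1e-12)
        x = x - eta * g
    return x

# Toy usage: minimize f(x) = 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
sol = dog_sgd(lambda x: A.T @ (A @ x - b), x0=np.zeros(5))
print(np.linalg.norm(A @ sol - b))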