Shampoo: Preconditioned Stochastic Tensor Optimization
Publication: 6298261
arXiv: 1802.09568
MaRDI QID: Q6298261
Author name not available
Publication date: 26 February 2018
Abstract: Preconditioned gradient methods are among the most general and powerful tools in optimization. However, preconditioning requires storing and manipulating prohibitively large matrices. We describe and analyze a new structure-aware preconditioning algorithm, called Shampoo, for stochastic optimization over tensor spaces. Shampoo maintains a set of preconditioning matrices, each of which operates on a single dimension, contracting over the remaining dimensions. We establish convergence guarantees in the stochastic convex setting, the proof of which builds upon matrix trace inequalities. Our experiments with state-of-the-art deep learning models show that Shampoo is capable of converging considerably faster than commonly used optimizers. Although it involves a more complex update rule, Shampoo's runtime per step is comparable to that of simple gradient methods such as SGD, AdaGrad, and Adam.
Has companion code repository: https://github.com/Daniil-Selikhanovych/Shampoo_optimizer
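As a rough illustration of the per-dimension preconditioning described in the abstract, the following is a minimal NumPy sketch of the matrix (order-2 tensor) case: a left and a right statistic are accumulated from G Gᵀ and Gᵀ G and applied as inverse fourth roots. The class name, step size, and epsilon below are illustrative choices and are not taken from this page or the companion repository.

```python
import numpy as np

def inv_fourth_root(M, eps=1e-6):
    """Return M^{-1/4} for a symmetric PSD matrix M via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    w = np.maximum(w, 0.0) + eps          # guard against tiny negative eigenvalues
    return (V * w ** -0.25) @ V.T

class ShampooMatrixSketch:
    """Illustrative matrix-case update: W <- W - lr * L^{-1/4} G R^{-1/4}."""
    def __init__(self, W, lr=0.1, eps=1e-6):
        m, n = W.shape
        self.W, self.lr = W, lr
        self.L = eps * np.eye(m)          # statistics for the row dimension
        self.R = eps * np.eye(n)          # statistics for the column dimension

    def step(self, G):
        # Each preconditioner sees the gradient contracted over the other dimension.
        self.L += G @ G.T
        self.R += G.T @ G
        self.W -= self.lr * inv_fourth_root(self.L) @ G @ inv_fourth_root(self.R)
        return self.W

# Toy usage: one update on a random 4x3 weight matrix.
rng = np.random.default_rng(0)
opt = ShampooMatrixSketch(rng.standard_normal((4, 3)))
opt.step(rng.standard_normal((4, 3)))
```

For an order-k tensor the same scheme keeps one preconditioner per mode and replaces the exponent -1/4 with -1/(2k), as described in the paper.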