Shampoo: Preconditioned Stochastic Tensor Optimization
Author(s): Gupta, Vineet; Koren, Tomer; Singer, Yoram
To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr1t54j
Abstract: Preconditioned gradient methods are among the most general and powerful tools in optimization. However, preconditioning requires storing and manipulating prohibitively large matrices. We describe and analyze a new structure-aware preconditioning algorithm, called Shampoo, for stochastic optimization over tensor spaces. Shampoo maintains a set of preconditioning matrices, each of which operates on a single dimension, contracting over the remaining dimensions. We establish convergence guarantees in the stochastic convex setting, the proof of which builds upon matrix trace inequalities. Our experiments with state-of-the-art deep learning models show that Shampoo is capable of converging considerably faster than commonly used optimizers. Surprisingly, although it involves a more complex update rule, Shampoo's runtime per step is comparable in practice to that of simple gradient methods such as SGD, AdaGrad, and Adam.
Publication Date: 2018
Citation: Gupta, Vineet, Tomer Koren, and Yoram Singer. "Shampoo: Preconditioned Stochastic Tensor Optimization." In Proceedings of the 35th International Conference on Machine Learning 80 (2018): pp. 1842-1850.
ISSN: 2640-3498
Pages: 1842 - 1850
Type of Material: Conference Article
Journal/Proceeding Title: Proceedings of the 35th International Conference on Machine Learning
Version: Final published version. Article is made available in OAR by the publisher's permission or policy.
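
To make the abstract's description of per-dimension preconditioners concrete, below is a minimal NumPy sketch of the matrix (order-2 tensor) case of the Shampoo update described in the paper: left and right second-moment statistics of the gradient are accumulated, and the parameter is updated with the gradient preconditioned by their inverse fourth roots. The function name, default hyperparameter values, and the eigendecomposition-based matrix root are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def shampoo_matrix_step(W, G, L, R, lr=1.0, eps=1e-4):
    """One Shampoo step for a matrix parameter W with gradient G.

    L and R are the running left/right preconditioner statistics,
    initialized to eps * I. This sketches only the matrix case; the
    full algorithm keeps one such preconditioner per tensor dimension.
    """
    # Accumulate statistics for each dimension, contracting over the other.
    L = L + G @ G.T          # left statistics, shape (m, m)
    R = R + G.T @ G          # right statistics, shape (n, n)

    def inv_fourth_root(M):
        # M^{-1/4} for a symmetric positive semidefinite matrix,
        # computed via eigendecomposition (illustrative choice).
        vals, vecs = np.linalg.eigh(M)
        return (vecs * np.clip(vals, eps, None) ** -0.25) @ vecs.T

    # Precondition the gradient on both sides and take a step.
    W = W - lr * inv_fourth_root(L) @ G @ inv_fourth_root(R)
    return W, L, R

# Example usage with a hypothetical 4x3 parameter:
# m, n = 4, 3
# W = np.zeros((m, n))
# L, R = 1e-4 * np.eye(m), 1e-4 * np.eye(n)
# G = np.random.randn(m, n)   # stands in for a stochastic gradient
# W, L, R = shampoo_matrix_step(W, G, L, R)
```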