Efficient Full-Matrix Adaptive Regularization

Author(s): Agarwal, Naman; Bullins, Brian; Chen, Xinyi; Hazan, Elad; Singh, Karan; Zhang, Cyril; Zhang, Yi

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1dg1k
Abstract: Adaptive regularization methods pre-multiply a descent direction by a preconditioning matrix. Due to the large number of parameters of machine learning problems, full-matrix preconditioning methods are prohibitively expensive. We show how to modify full-matrix adaptive regularization in order to make it practical and effective. We also provide a novel theoretical analysis for adaptive regularization in non-convex optimization settings. The core of our algorithm, termed GGT, consists of the efficient computation of the inverse square root of a low-rank matrix. Our preliminary experiments show improved iteration-wise convergence rates across synthetic tasks and standard deep learning benchmarks, and that the more carefully-preconditioned steps sometimes lead to a better solution.
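Below is a minimal NumPy sketch of the low-rank inverse-square-root computation the abstract describes: applying (GGᵀ + εI)^{-1/2} to a gradient, where G is the small window of recent gradients. The function name, epsilon default, and rank threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ggt_precondition(G, g, eps=1e-4):
    """Apply (G G^T + eps*I)^{-1/2} to gradient g, where G is the d x r
    window of recent gradients (r << d). Hypothetical helper illustrating
    the low-rank inverse-square-root trick; not the authors' code.
    """
    # Small r x r Gram matrix: its eigenvalues are the squared singular
    # values of G, its eigenvectors are the right singular vectors.
    M = G.T @ G
    sigma_sq, V = np.linalg.eigh(M)
    sigma_sq = np.clip(sigma_sq, 0.0, None)
    sigma = np.sqrt(sigma_sq)

    # Left singular vectors spanning the gradient subspace (drop tiny modes).
    nz = sigma > 1e-12
    U = G @ (V[:, nz] / sigma[nz])          # d x r'

    # (G G^T + eps I)^{-1/2} g
    #   = U [ (sigma^2 + eps)^{-1/2} - eps^{-1/2} ] U^T g  +  eps^{-1/2} g
    coeff = 1.0 / np.sqrt(sigma_sq[nz] + eps) - 1.0 / np.sqrt(eps)
    return U @ (coeff * (U.T @ g)) + g / np.sqrt(eps)
```

Because the eigendecomposition is of the r x r matrix GᵀG rather than the d x d matrix GGᵀ, the per-step cost stays linear in the parameter dimension d, which is what makes full-matrix preconditioning practical in this setting.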
Publication Date: 2019
Citation: Agarwal, Naman, Brian Bullins, Xinyi Chen, Elad Hazan, Karan Singh, Cyril Zhang, and Yi Zhang. "Efficient Full-Matrix Adaptive Regularization." In Proceedings of the 36th International Conference on Machine Learning (2019): pp. 102-110.
ISSN: 2640-3498
Pages: 102-110
Type of Material: Conference Article
Journal/Proceeding Title: Proceedings of the 36th International Conference on Machine Learning
Version: Final published version. Article is made available in OAR by the publisher's permission or policy.



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.