Second-order stochastic optimization for machine learning in linear time
Author(s): Agarwal, N.; Bullins, B.; Hazan, E.
To refer to this page use: http://arks.princeton.edu/ark:/88435/pr13x1d
Abstract: First-order stochastic methods are the state of the art in large-scale machine learning optimization owing to their efficient per-iteration complexity. Second-order methods, while able to provide faster convergence, have been much less explored because of the high cost of computing the second-order information. In this paper we develop second-order stochastic methods for optimization problems in machine learning that match the per-iteration cost of gradient-based methods and, in certain settings, improve upon the overall running time of popular first-order methods. Furthermore, our algorithm has the desirable property of being implementable in time linear in the sparsity of the input data. (An illustrative sketch of such an estimator appears after this record.)
Publication Date: 1-Nov-2017
Electronic Publication Date: 1-Nov-2017
Citation: Agarwal, N., Bullins, B., & Hazan, E. (2017). Second-order stochastic optimization for machine learning in linear time. Journal of Machine Learning Research, 18, 1-40.
Pages: 1-40
Type of Material: Journal Article
Journal/Proceeding Title: Journal of Machine Learning Research
Version: Author's manuscript
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.
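The abstract claims second-order steps at the per-iteration cost of gradient methods, linear in the input sparsity, without describing the estimator. Below is a minimal sketch of one standard way to achieve this, assuming a truncated Neumann-series (Taylor-expansion) estimate of the inverse-Hessian-vector product built from cheap single-example Hessian-vector products. The toy quadratic objective, the names hess_vec_sample and estimate_newton_direction, and the depth/n_estimates parameters are all illustrative assumptions, not the paper's notation or exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: f(w) = (1/(2n)) * sum_i (a_i^T w)^2 + (lam/2) * ||w||^2, so the
# per-example Hessian is a_i a_i^T + lam*I and the full Hessian is
# H = A^T A / n + lam*I. The data is scaled so that ||H|| < 1, which the
# Neumann series below requires.
n, d, lam = 200, 10, 0.1
A = rng.standard_normal((n, d)) / np.sqrt(n + d)  # crude scaling keeps ||H|| < 1
H = A.T @ A / n + lam * np.eye(d)
g = rng.standard_normal(d)

def hess_vec_sample(x):
    """Hessian-vector product for one randomly drawn example. For a sparse a_i
    this costs O(nnz(a_i)), which is where a linear-in-sparsity per-iteration
    cost can come from."""
    a = A[rng.integers(n)]
    return a * (a @ x) + lam * x

def estimate_newton_direction(g, depth=200, n_estimates=20):
    """Estimate H^{-1} g via the recursion x_t = g + x_{t-1} - H_i x_{t-1},
    i.e. a truncated Neumann series sum_k (I - H)^k g with the exact Hessian
    replaced by cheap single-example samples, averaged to reduce variance."""
    estimates = []
    for _ in range(n_estimates):
        x = g.copy()
        for _ in range(depth):
            x = g + x - hess_vec_sample(x)  # one stochastic Neumann step
        estimates.append(x)
    return np.mean(estimates, axis=0)

x_hat = estimate_newton_direction(g)
x_true = np.linalg.solve(H, g)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In a full second-order stochastic method, a step of this form would replace the plain stochastic gradient step, so each iteration still touches only a small number of examples while using curvature information.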