
Variance reduction for faster non-convex optimization

Author(s): Allen-Zhu, Zeyuan; Hazan, Elad

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1cd4q
Abstract: We consider the fundamental problem in non-convex optimization of efficiently reaching a stationary point. In contrast to the convex case, in the long history of this basic problem, the only known theoretical results on first-order non-convex optimization remain to be full gradient descent that converges in O(1/ε) iterations for smooth objectives, and stochastic gradient descent that converges in O(1/ε²) iterations for objectives that are sums of smooth functions. We provide the first improvement in this line of research. Our result is based on the variance reduction trick recently introduced to convex optimization, as well as a brand new analysis of variance reduction that is suitable for non-convex optimization. For objectives that are sums of smooth functions, our first-order minibatch stochastic method converges with an O(1/ε) rate, and is faster than full gradient descent by Ω(n^{1/3}). We demonstrate the effectiveness of our methods on empirical risk minimization with non-convex loss functions and training neural nets.
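For context on the "variance reduction trick" mentioned in the abstract, below is a minimal Python sketch of a generic SVRG-style variance-reduced gradient loop for a finite-sum objective f(x) = (1/n) * sum_i f_i(x). It illustrates the general estimator only, not the paper's specific minibatch method, step sizes, or analysis; the function names and parameters (grad_i, eta, m, epochs) are illustrative assumptions.

import numpy as np

def svrg_sketch(grad_i, x0, n, eta=0.01, m=50, epochs=20, seed=None):
    """Minimize (1/n) * sum_i f_i(x) with SVRG-style variance-reduced steps.

    grad_i(i, x) -- gradient of the i-th component function at x.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        snapshot = x.copy()
        # Full gradient at the snapshot, computed once per epoch.
        full_grad = sum(grad_i(i, snapshot) for i in range(n)) / n
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced estimator: unbiased for the full gradient,
            # with variance that shrinks as x stays close to the snapshot.
            v = grad_i(i, x) - grad_i(i, snapshot) + full_grad
            x -= eta * v
    return x

The key point is the estimator grad_i(i, x) - grad_i(i, snapshot) + full_grad, which replaces the plain stochastic gradient and allows larger, non-vanishing step sizes than standard SGD.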
Publication Date: 2016
Electronic Publication Date: 2016
Citation: Allen-Zhu, Z., & Hazan, E. (2016). Variance reduction for faster non-convex optimization. 33rd International Conference on Machine Learning, ICML 2016, 2, 1093–1101.
Pages: 1093 - 1101
Type of Material: Conference Article
Journal/Proceeding Title: 33rd International Conference on Machine Learning, ICML 2016
Version: Author's manuscript



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.