
Local smoothness in variance reduced optimization

Author(s): Vainsencher, Daniel; Liu, Han; Zhang, Tong

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr14z3z
Abstract: We propose a family of non-uniform sampling strategies that provably speed up a class of stochastic optimization algorithms with linear convergence, including Stochastic Variance Reduced Gradient (SVRG) and Stochastic Dual Coordinate Ascent (SDCA). For a large family of penalized empirical risk minimization problems, our methods exploit data-dependent local smoothness of the loss functions near the optimum while maintaining convergence guarantees. Our bounds are the first to quantify the advantage gained from local smoothness, which is significant for some problems. Empirically, we provide thorough numerical results to back up our theory. Additionally, we present algorithms that exploit local smoothness in more aggressive ways and perform even better in practice.
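The following is a minimal illustrative sketch (not the paper's method) of SVRG with non-uniform importance sampling on a ridge-regression problem, the general style of variance-reduced algorithm the abstract refers to. The sampling weights p_i below come from simple global per-example smoothness bounds; the paper's contribution is to replace such global bounds with data-dependent local smoothness near the optimum, which is not reproduced here. All names and constants are assumptions for illustration.

```python
# Sketch, assuming SVRG with importance sampling on ridge regression.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
lam = 0.1  # ridge penalty

def grad_i(w, i):
    # Gradient of the i-th loss: 0.5*(a_i^T w - b_i)^2 + 0.5*lam*||w||^2
    return (A[i] @ w - b[i]) * A[i] + lam * w

def full_grad(w):
    return A.T @ (A @ w - b) / n + lam * w

# Non-uniform sampling proportional to the global smoothness constant
# ||a_i||^2 + lam of each example (the paper instead uses tighter,
# data-dependent local smoothness near the optimum).
L_i = np.sum(A**2, axis=1) + lam
p = L_i / L_i.sum()

w_tilde = np.zeros(d)
eta = 0.5 / L_i.max()
for epoch in range(20):
    mu = full_grad(w_tilde)        # full gradient at the reference point
    w = w_tilde.copy()
    for _ in range(2 * n):         # inner stochastic updates
        i = rng.choice(n, p=p)
        # Importance-weighted variance-reduced gradient (unbiased estimator of full_grad(w))
        g = (grad_i(w, i) - grad_i(w_tilde, i)) / (n * p[i]) + mu
        w -= eta * g
    w_tilde = w

obj = 0.5 * np.mean((A @ w_tilde - b) ** 2) + 0.5 * lam * w_tilde @ w_tilde
print("final objective:", obj)
```

Scaling each sampled gradient difference by 1/(n*p_i) keeps the stochastic gradient estimate unbiased under non-uniform sampling, which is why the step-size and linear-convergence analysis of uniform SVRG can be adapted to smoothness-dependent sampling distributions.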
Publication Date: 1-Jan-2015
Citation: Vainsencher, D., Liu, H., Zhang, T. (2015). Local smoothness in variance reduced optimization. Advances in Neural Information Processing Systems, 2015-January (2179 - 2187).
ISSN: 1049-5258
Pages: 2179 - 2187
Type of Material: Journal Article
Journal/Proceeding Title: Advances in Neural Information Processing Systems
Version: Final published version. Article is made available in OAR by the publisher's permission or policy.



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.