A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
Author(s): Garber, Dan; Hazan, Elad
To refer to this page use: http://arks.princeton.edu/ark:/88435/pr11k0h
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Garber, Dan | - |
dc.contributor.author | Hazan, Elad | - |
dc.date.accessioned | 2021-10-08T19:48:36Z | - |
dc.date.available | 2021-10-08T19:48:36Z | - |
dc.date.issued | 2016 | en_US |
dc.identifier.citation | Garber, Dan, and Elad Hazan. "A linearly convergent variant of the conditional gradient algorithm under strong convexity, with applications to online and stochastic optimization." SIAM Journal on Optimization 26, no. 3 (2016): 1493-1528. doi:10.1137/140985366 | en_US |
dc.identifier.issn | 1052-6234 | - |
dc.identifier.uri | https://arxiv.org/pdf/1301.4666.pdf | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr11k0h | - |
dc.description.abstract | Linear optimization is often algorithmically simpler than nonlinear convex optimization. Matroid polytopes, matching polytopes, and path polytopes are examples of domains for which we have simple and efficient combinatorial algorithms for linear optimization, but whose nonlinear convex counterparts are harder and admit significantly less efficient algorithms. This motivates the computational model of convex optimization, including the offline, online, and stochastic settings, using a linear optimization oracle. In this computational model we give several new results that improve on the previous state of the art. Our main result is a novel conditional gradient algorithm for smooth and strongly convex optimization over polyhedral sets that performs only a single linear optimization step over the domain on each iteration and enjoys a linear convergence rate. This gives an exponential improvement in convergence rate over previous results. Based on this new conditional gradient algorithm, we give the first algorithms for online convex optimization over polyhedral sets that perform only a single linear optimization step over the domain while having optimal regret guarantees, answering an open question of Kalai and Vempala and of Hazan and Kale. Our online algorithms also imply conditional gradient algorithms for nonsmooth and stochastic convex optimization with the same convergence rates as projected (sub)gradient methods. (See the illustrative sketch after this metadata table.) | en_US |
dc.format.extent | 1493 - 1528 | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartof | SIAM Journal on Optimization | en_US |
dc.rights | Author's manuscript | en_US |
dc.title | A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization | en_US |
dc.type | Journal Article | en_US |
dc.identifier.doi | 10.1137/140985366 | - |
dc.identifier.eissn | 1095-7189 | - |
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US |
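
The abstract describes optimization using only a linear optimization oracle over the domain. The sketch below is a minimal illustration of the classic conditional gradient (Frank-Wolfe) template in that oracle model, not the paper's linearly convergent variant (which, per the abstract, attains a linear rate for smooth, strongly convex objectives over polyhedral sets). The simplex domain, the function names `linear_oracle_simplex` and `conditional_gradient`, and the quadratic example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def linear_oracle_simplex(grad):
    """Linear optimization oracle for the probability simplex:
    argmin over v in the simplex of <grad, v> is the vertex e_i
    with i = argmin_i grad_i, so one oracle call costs O(n)."""
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def conditional_gradient(grad_f, x0, oracle, T=500):
    """Classic Frank-Wolfe: a single linear oracle call per iteration,
    no projections. Converges at the standard O(1/t) rate; the paper's
    variant modifies the oracle step to obtain a linear rate."""
    x = x0.copy()
    for t in range(T):
        v = oracle(grad_f(x))             # one linear optimization step over the domain
        gamma = 2.0 / (t + 2.0)           # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * v # convex combination keeps x feasible
    return x

# Example: minimize the smooth, strongly convex f(x) = 0.5 * ||x - y||^2
# over the probability simplex, i.e. project y onto the simplex using
# only linear oracle calls.
y = np.array([0.6, 0.3, -0.2, 0.1])
grad_f = lambda x: x - y
x0 = np.full(4, 0.25)
print(conditional_gradient(grad_f, x0, linear_oracle_simplex))
```
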
Files in This Item:
File | Description | Size | Format
---|---|---|---
LinearlyConvConditionalGradient.pdf | | 394.07 kB | Adobe PDF
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.