Online Improper Learning with an Approximation Oracle

Author(s): Hazan, Elad; Hu, Wei; Li, Yuanzhi; Li, Zhiyuan

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1d55t
Abstract: We study the following question: given an efficient approximation algorithm for an optimization problem, can we learn efficiently in the same setting? We give a formal affirmative answer to this question in the form of a reduction from online learning to offline approximate optimization, using an efficient algorithm that guarantees near-optimal regret. The algorithm is efficient in terms of the number of calls it makes to the given approximation oracle: only logarithmically many such calls per iteration. This resolves open questions posed by Kalai and Vempala and by Garber. Furthermore, our result extends to the more general setting of improper learning.
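
To illustrate the kind of reduction the abstract refers to, the following is a minimal sketch (not the paper's actual algorithm) of an online learner in the style of follow-the-perturbed-leader that, in each round, makes a single call to an offline approximation oracle over a finite decision set. The function approx_oracle is an assumed stand-in interface for any offline alpha-approximation algorithm; here it simply returns the exact minimizer for simplicity.

    import numpy as np

    def approx_oracle(loss_vector, decisions, alpha=1.0):
        # Stand-in for an offline alpha-approximation oracle: returns a decision
        # whose linear loss is within a factor alpha of the minimum over `decisions`.
        # For this sketch we just return the exact minimizer (alpha = 1).
        losses = [float(np.dot(loss_vector, d)) for d in decisions]
        return decisions[int(np.argmin(losses))]

    def online_with_oracle(loss_vectors, decisions, eta=0.1, seed=0):
        # Follow-the-perturbed-leader-style loop: each round, perturb the
        # cumulative loss and play the oracle's answer (one oracle call per round).
        rng = np.random.default_rng(seed)
        dim = len(loss_vectors[0])
        cumulative = np.zeros(dim)
        total_loss = 0.0
        for loss in loss_vectors:
            perturbation = rng.exponential(scale=1.0 / eta, size=dim)
            play = approx_oracle(cumulative - perturbation, decisions)
            total_loss += float(np.dot(loss, play))
            cumulative += loss
        return total_loss

    if __name__ == "__main__":
        decisions = [np.array(v, dtype=float) for v in ([1, 0], [0, 1], [1, 1])]
        losses = [np.array([0.2, 0.8]), np.array([0.9, 0.1]), np.array([0.5, 0.5])]
        print(online_with_oracle(losses, decisions))

This sketch only conveys the oracle-based structure of such reductions; the paper's contribution concerns how to obtain near-optimal regret with logarithmically many oracle calls per iteration, which this simple loop does not capture.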
Publication Date: 2018
Citation: Hazan, Elad, Wei Hu, Yuanzhi Li, and Zhiyuan Li. "Online improper learning with an approximation oracle." In Advances in Neural Information Processing Systems 31 (2018).
ISSN: 1049-5258
Type of Material: Conference Article
Journal/Proceeding Title: Advances in Neural Information Processing Systems
Version: Final published version. Article is made available in OAR by the publisher's permission or policy.



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.