We consider the decision-making framework of online convex optimization with a very large number of experts. This setting is ubiquitous in contextual and reinforcement learning problems, where the size of the policy class renders enumeration and search within the policy class infeasible. Instead, we consider generalizing the methodology of online boosting. We define a weak learning algorithm as a mechanism that guarantees multiplicatively approximate regret against a base class of experts. In this access model, we give an efficient boosting algorithm that guarantees near-optimal regret against the convex hull of the base class. We consider both full and partial (a.k.a. bandit) information feedback models. We also give an analogous efficient boosting algorithm for the i.i.d. statistical setting. Our results simultaneously generalize online boosting and gradient boosting guarantees to the contextual learning model, online convex optimization, and bandit linear optimization settings.
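The abstract's core loop, in which a booster combines several weak online learners and feeds each a linearized loss, can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the `WeakLearner` class (online gradient descent over the unit ball) and the Frank-Wolfe-style combination weights are illustrative assumptions standing in for the multiplicative-regret weak learners the paper assumes.

```python
import numpy as np

class WeakLearner:
    """Illustrative weak learner: online gradient descent over the unit ball.
    (Hypothetical stand-in; the paper only assumes multiplicatively
    approximate regret against a base class of experts.)"""
    def __init__(self, d, lr=0.1):
        self.x = np.zeros(d)
        self.lr = lr

    def predict(self):
        return self.x

    def update(self, grad):
        # Gradient step followed by projection onto the unit ball.
        self.x = self.x - self.lr * grad
        norm = np.linalg.norm(self.x)
        if norm > 1.0:
            self.x /= norm

def boosted_round(learners, grad_fn):
    """One round of a Frank-Wolfe-style boosted prediction (sketch).

    Combines the weak learners' predictions with classic Frank-Wolfe
    step sizes, plays the combined point, then passes the round's
    gradient to every weak learner as a linear proxy loss.
    """
    x = np.zeros_like(learners[0].predict())
    for i, learner in enumerate(learners):
        eta = 2.0 / (i + 2)                 # Frank-Wolfe step size
        x = (1.0 - eta) * x + eta * learner.predict()
    g = grad_fn(x)                           # gradient of the convex loss at x
    for learner in learners:
        learner.update(g)                    # linearized feedback to each learner
    return x
```

As a usage example, running this loop on the fixed quadratic loss f(x) = ||x - y||^2 drives the combined prediction toward y, since every weak learner receives the gradient of the played point each round.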
Hazan, Elad and Singh, Karan. "Boosting for Online Convex Optimization." Proceedings of the 38th International Conference on Machine Learning 139 (2021): 4140-4149.
Pages: 4140-4149
Type of Material:
Appears in: Proceedings of the 38th International Conference on Machine Learning
Final published version. This is an open access article.
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.