
Online gradient boosting

Author(s): Beygelzimer, A.; Hazan, E.; Kale, S.; Luo, H.

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1nm3b
Abstract: We extend the theory of boosting for regression problems to the online learning setting. Generalizing from the batch setting, we model a weak learning algorithm as an online learning algorithm with linear loss functions that competes with a base class of regression functions, while a strong learning algorithm is an online learning algorithm with smooth convex loss functions that competes with a larger class of regression functions. Our main result is an online gradient boosting algorithm that converts a weak online learning algorithm into a strong one, where the larger class of functions is the linear span of the base class. We also give a simpler boosting algorithm that converts a weak online learning algorithm into a strong one where the larger class of functions is the convex hull of the base class, and prove its optimality.
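To illustrate the idea described in the abstract, the following is a minimal sketch of an online boosting loop, not the paper's exact pseudocode: several copies of a weak online learner are chained, each copy receives first-order (linear-loss) feedback from the smooth convex loss evaluated at the partial prediction built so far, and its output is blended in with a shrinkage step size. The class names, the linear weak learner, the squared loss, and the parameters `n_learners` and `eta` are illustrative assumptions.

```python
import numpy as np


class OnlineLinearLearner:
    """Hypothetical weak online learner: a linear predictor updated by
    online gradient descent on the linear losses it receives. Stands in
    for any base online regression algorithm."""

    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, grad):
        # Linear loss <grad, prediction> induces a gradient step on w.
        self.w -= self.lr * grad * x


class OnlineGradientBoosting:
    """Illustrative online boosting wrapper (a sketch, not the paper's
    algorithm): copy i of the weak learner is asked to improve the
    partial prediction built from copies 1..i-1."""

    def __init__(self, dim, n_learners=10, eta=0.1):
        self.learners = [OnlineLinearLearner(dim) for _ in range(n_learners)]
        self.eta = eta

    def _partial_predictions(self, x):
        # Partial predictions y_0 = 0, y_1, ..., y_N built by shrunken updates.
        partial = [0.0]
        for learner in self.learners:
            partial.append((1 - self.eta) * partial[-1] + self.eta * learner.predict(x))
        return partial

    def round(self, x, y):
        """One online round with squared loss as an example of a smooth
        convex loss: predict, observe y, then pass linear (first-order)
        feedback to each weak learner."""
        partial = self._partial_predictions(x)
        y_hat = partial[-1]
        loss = 0.5 * (y_hat - y) ** 2
        for i, learner in enumerate(self.learners):
            # Gradient of the squared loss at the partial prediction
            # that learner i was asked to improve.
            grad = partial[i] - y
            learner.update(x, grad)
        return y_hat, loss
```

For example, feeding a stream of `(x, y)` pairs through `round` produces one prediction per round before the label is revealed, matching the online protocol the abstract describes; the combined predictor's regret is then measured against the richer comparator class (convex hull or linear span of the base class).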
Publication Date: 2015
Electronic Publication Date: 2015
Citation: Beygelzimer, A., Hazan, E., Kale, S., & Luo, H. (2015). Online gradient boosting. Advances in Neural Information Processing Systems, 2015-January, 2458 - 2466.
Pages: 2458 - 2466
Type of Material: Conference Article
Journal/Proceeding Title: Advances in Neural Information Processing Systems
Version: Author's manuscript



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.