Convex risk minimization and conditional probability estimation
Author(s): Telgarsky, M; Dudík, M; Schapire, R
To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr1524v
Abstract: © 2015 M. Telgarsky, M. Dudík & R. Schapire. This paper proves, in very general settings, that convex risk minimization is a procedure for selecting a unique conditional probability model determined by the classification problem. Unlike most previous work, we give results general enough to include cases in which no minimum exists, as typically occurs, for instance, with standard boosting algorithms. Concretely, we first show that any sequence of predictors minimizing the convex risk over the source distribution converges to this unique model when the class of predictors is linear (but potentially infinite-dimensional). Second, we show that the same result holds for empirical risk minimization whenever the class of predictors is finite-dimensional, where the essential technical contribution is a norm-free generalization bound.
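To make the abstract's claim concrete, the following is a minimal sketch, not taken from the paper: it minimizes one particular convex risk (the logistic loss) over a linear class by gradient descent, then reads off conditional probability estimates Pr(Y=1|X=x) through the loss's link function (here the sigmoid). The synthetic data, the choice of logistic loss, and the optimization loop are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data with a known conditional model.
n, d = 2000, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])
p_true = 1.0 / (1.0 + np.exp(-X @ w_true))        # true Pr(Y=1 | X)
y = (rng.uniform(size=n) < p_true).astype(float)  # labels in {0, 1}

def logistic_risk_grad(w):
    """Gradient of the empirical logistic risk (a convex risk)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / n

# Plain gradient descent: any sequence of linear predictors driving the
# convex risk toward its infimum suffices; no minimizer needs to be attained.
w = np.zeros(d)
for _ in range(5000):
    w -= 0.5 * logistic_risk_grad(w)

# Conditional probability estimates recovered through the link function.
p_hat = 1.0 / (1.0 + np.exp(-X @ w))
print("mean |p_hat - p_true|:", np.abs(p_hat - p_true).mean())

With well-specified data as above, the recovered probabilities track the true conditional model; the paper's contribution is to characterize the unique limiting model in far more general settings, including ones where the risk has no minimizer.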
Publication Date: 1-Jan-2015
Citation: Telgarsky, M., Dudík, M., & Schapire, R. (2015). Convex risk minimization and conditional probability estimation. Journal of Machine Learning Research, 40 (2015).
ISSN: 1532-4435
EISSN: 1533-7928
Type of Material: Conference Article
Journal/Proceeding Title: Journal of Machine Learning Research
Version: Final published version. This is an open access article.
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.