Neyman-Pearson classification, convexity and stochastic constraints

Author(s): Rigollet, Philippe; Tong, Xin

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr11v2c
Abstract: Motivated by problems of anomaly detection, this paper implements the Neyman-Pearson paradigm to deal with asymmetric errors in binary classification with a convex loss. Given a finite collection of classifiers, we combine them and obtain a new classifier that simultaneously satisfies the following two properties with high probability: (i) its probability of type I error is below a pre-specified level, and (ii) its probability of type II error is close to the minimum possible. The proposed classifier is obtained by solving an optimization problem with an empirical objective and an empirical constraint. New techniques to handle such problems are developed and have consequences for chance-constrained programming.
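The following is a minimal, illustrative sketch of the constrained empirical optimization described in the abstract, not the paper's exact estimator: a convex combination of base classifier scores is chosen to minimize a convex surrogate of the type II error subject to a convex surrogate of the type I error staying below a target level. All names and data here (alpha, the hinge surrogate, the synthetic scores) are assumptions made for the example.

```python
# Sketch: Neyman-Pearson-style classification with a convex (hinge) surrogate.
# Given base classifiers f_1..f_M, find a convex combination whose empirical
# type I surrogate is at most alpha while its type II surrogate is minimized.
# This is an illustration of the general idea, not the paper's procedure.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic scores of M = 3 base classifiers on class 0 (null) and class 1.
M = 3
scores_class0 = rng.normal(-2.0, 1.0, size=(200, M))  # should be labeled -1
scores_class1 = rng.normal(+2.0, 1.0, size=(200, M))  # should be labeled +1

alpha = 0.1  # pre-specified bound on the (surrogate) type I error


def hinge(margins):
    """Mean hinge loss of the given margins (a convex surrogate loss)."""
    return np.mean(np.maximum(0.0, 1.0 - margins))


def type1_surrogate(w):
    # Penalizes classifying class-0 points as +1 (margin is -f(x) on class 0).
    return hinge(-scores_class0 @ w)


def type2_surrogate(w):
    # Penalizes classifying class-1 points as -1 (margin is +f(x) on class 1).
    return hinge(scores_class1 @ w)


# Minimize the empirical type II surrogate subject to the empirical type I
# surrogate being at most alpha, over convex combinations of base classifiers.
w0 = np.full(M, 1.0 / M)
res = minimize(
    type2_surrogate,
    w0,
    method="SLSQP",
    bounds=[(0.0, 1.0)] * M,
    constraints=[
        {"type": "ineq", "fun": lambda w: alpha - type1_surrogate(w)},
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
    ],
)

print("weights:", res.x)
print("type I surrogate:", type1_surrogate(res.x))
print("type II surrogate:", type2_surrogate(res.x))
```

Both the objective and the constraint above are empirical quantities, which is the source of the difficulty the paper addresses: the constraint itself is random, hence the connection to chance-constrained programming.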
Publication Date: Oct-2011
Citation: Rigollet, P., & Tong, X. (2011). Neyman-Pearson classification, convexity and stochastic constraints. Journal of Machine Learning Research, 12(Oct), 2831-2855. Retrieved from http://www.jmlr.org/papers/volume12/rigollet11a/rigollet11a.pdf
Pages: 2831 - 2855
Type of Material: Journal Article
Journal/Proceeding Title: Journal of Machine Learning Research
Version: Final published version. This is an open access article.
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.