Abstract: We consider the high-dimensional discriminant analysis problem. For this problem, different methods have been proposed and justified by establishing exact convergence rates for the classification risk, as well as ℓ₂ convergence results for the discriminative rule. However, a sharp theoretical analysis of the variable selection performance of these procedures has not been established, even though model interpretation is of fundamental importance in scientific data analysis. This paper bridges the gap by providing sharp sufficient conditions for consistent variable selection using sparse discriminant analysis. Through careful analysis, we establish rates of convergence that are significantly faster than the best known results and admit an optimal scaling of the sample size n, dimensionality p, and sparsity level s in the high-dimensional setting. These sufficient conditions are complemented by necessary information-theoretic limits on the variable selection problem in the context of high-dimensional discriminant analysis. Exploiting a numerical equivalence result, our analysis also establishes optimal results for the ROAD estimator and the sparse optimal scoring estimator. Furthermore, we analyze an exhaustive search procedure, whose performance serves as a benchmark, and show that it is variable selection consistent under weaker conditions. Extensive simulations demonstrating the sharpness of the bounds are also provided.
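The abstract's notion of consistent variable selection via sparse discriminant analysis can be made concrete with a small sketch. Since the paper notes a numerical equivalence between the sparse discriminant analysis estimator and the sparse optimal scoring estimator, support recovery in the two-class case can be illustrated by an ℓ1-penalized regression of a centered label encoding on the features. The simulation parameters, the scikit-learn Lasso solver, and the regularization level below are illustrative assumptions, not the paper's exact estimator or tuning.

```python
# Illustrative sketch: variable selection in two-class Gaussian
# discriminant analysis via an l1-penalized optimal-scoring regression.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 200, 500, 5                  # sample size, dimension, sparsity level
beta_true = np.zeros(p)
beta_true[:s] = 1.0                    # true discriminative direction (support = first s coords)

y = rng.integers(0, 2, size=n)         # two classes, balanced in expectation
X = rng.standard_normal((n, p)) + np.outer(y, beta_true)   # class-1 mean shifted by beta_true

# For two classes, optimal scoring reduces to regressing a centered
# label encoding on X; the l1 penalty induces variable selection.
z = (y - y.mean()).astype(float)
model = Lasso(alpha=0.15).fit(X, z)    # alpha is an illustrative choice; tune by CV in practice

selected = np.flatnonzero(model.coef_)
print("selected variables:", selected) # compare with the true support {0, ..., s-1}
```

With this signal strength and scaling of (n, p, s), the selected set typically coincides with the true support; weakening the mean separation or shrinking n illustrates the regimes where, as the paper shows, no procedure can succeed.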
Citation: Kolar, Mladen, and Han Liu. "Optimal Feature Selection in High-Dimensional Discriminant Analysis." IEEE Transactions on Information Theory 61, no. 2 (2014): 1063–1083.
Pages: 1063–1083
Type of Material: Journal Article
Journal/Proceeding Title: IEEE Transactions on Information Theory
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.