To refer to this page use: | http://arks.princeton.edu/ark:/88435/pr1587c |
Abstract: | Heavy-tailed high-dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. A natural procedure to address this problem is to use penalized quantile regression with a weighted L1-penalty, called weighted robust Lasso (WR-Lasso), in which weights are introduced to ameliorate the bias problem induced by the L1-penalty. In the ultra-high dimensional setting, where the dimensionality can grow exponentially with the sample size, we investigate the model selection oracle property and establish the asymptotic normality of the WR-Lasso. We show that only mild conditions on the model error distribution are needed. Our theoretical results also reveal that adaptive choice of the weight vector is essential for the WR-Lasso to enjoy these nice asymptotic properties. To make the WR-Lasso practically feasible, we propose a two-step procedure, called adaptive robust Lasso (AR-Lasso), in which the weight vector in the second step is constructed based on the L1-penalized quantile regression estimate from the first step. This two-step procedure is justified theoretically to possess the oracle property and asymptotic normality. Numerical studies demonstrate the favorable finite-sample performance of the AR-Lasso. |
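The two-step AR-Lasso procedure described in the abstract can be sketched in code. The sketch below is illustrative only: it uses scikit-learn's QuantileRegressor as a generic L1-penalized quantile-regression solver, implements the weighted penalty through the standard column-rescaling identity, and builds the second-step weights as 1/(|beta_hat| + eps), which is an assumed stand-in rather than the paper's exact weight construction; the tuning parameters lam1, lam2, tau, and eps are likewise placeholders.

# Minimal sketch of the WR-Lasso / AR-Lasso idea, under the assumptions stated above.
import numpy as np
from sklearn.linear_model import QuantileRegressor

def wr_lasso(X, y, weights, lam, tau=0.5):
    """Weighted L1-penalized quantile regression via column rescaling:
    penalizing d_j * |beta_j| with a uniform tuning parameter lam is
    equivalent to an ordinary L1 penalty after replacing column x_j with
    x_j / d_j, then rescaling the fitted coefficients back by 1 / d_j."""
    X = np.asarray(X, dtype=float)
    d = np.asarray(weights, dtype=float)
    X_scaled = X / d                      # divide each column j by d_j
    fit = QuantileRegressor(quantile=tau, alpha=lam,
                            fit_intercept=True, solver="highs").fit(X_scaled, y)
    return fit.coef_ / d, fit.intercept_

def ar_lasso(X, y, lam1, lam2, tau=0.5, eps=1e-6):
    """Two-step AR-Lasso sketch:
    step 1 -- plain L1-penalized quantile regression (unit weights);
    step 2 -- weighted refit with data-driven weights built from the
    step-1 estimate (here simply 1 / (|beta_hat| + eps), an illustrative
    choice: small first-step coefficients receive large penalties)."""
    p = np.asarray(X).shape[1]
    beta1, _ = wr_lasso(X, y, np.ones(p), lam1, tau)
    w = 1.0 / (np.abs(beta1) + eps)
    return wr_lasso(X, y, w, lam2, tau)

In this sketch the adaptivity comes entirely from the second-step weights: coefficients estimated as large in step 1 are penalized lightly in step 2, which is the mechanism the abstract credits for removing the bias of the plain L1-penalty. In practice lam1 and lam2 would be chosen by cross-validation or an information criterion.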
Publication Date: | Feb-2014 |
Citation: | Fan, Jianqing, Fan, Yingying, Barut, Emre. (2014). Adaptive robust variable selection. The Annals of Statistics, 42 (1), 324 - 351. doi:10.1214/13-AOS1191 |
DOI: | doi:10.1214/13-AOS1191 |
ISSN: | 0090-5364 |
Pages: | 324 - 351 |
Type of Material: | Journal Article |
Journal/Proceeding Title: | The Annals of Statistics |
Version: | Author's manuscript |
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.