Deviation optimal learning using greedy Q-aggregation

Author(s): Dai, Dong; Rigollet, Philippe; Zhang, Tong

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1x50x
Abstract: Given a finite family of functions, the goal of model selection aggregation is to construct a procedure that mimics the function from this family that is the closest to an unknown regression function. More precisely, we consider a general regression model with fixed design and measure the distance between functions by the mean squared error at the design points. While procedures based on exponential weights are known to solve the problem of model selection aggregation in expectation, they are, surprisingly, sub-optimal in deviation. We propose a new formulation called Q-aggregation that addresses this limitation; namely, its solution leads to sharp oracle inequalities that are optimal in a minimax sense. Moreover, based on the new formulation, we design greedy Q-aggregation procedures that produce sparse aggregation models achieving the optimal rate. The convergence and performance of these greedy procedures are illustrated and compared with other standard methods on simulated examples.
Publication Date: Jun-2012
Citation: Dai, Dong; Rigollet, Philippe; Zhang, Tong. (2012). Deviation optimal learning using greedy Q-aggregation. The Annals of Statistics, 40(3), 1878-1905. doi:10.1214/12-AOS1025
DOI: 10.1214/12-AOS1025
ISSN: 0090-5364
Pages: 1878-1905
Type of Material: Journal Article
Journal/Proceeding Title: The Annals of Statistics
Version: Author's manuscript
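
The abstract above centers on minimizing a Q-type criterion over the simplex of weights and on greedy procedures that keep the aggregate sparse. The following is a minimal, hypothetical Python sketch of that general idea, not the paper's actual algorithm: the function name `greedy_q_aggregation`, the specific criterion (a `nu`-weighted combination of the aggregate's empirical risk and the average of the individual risks), the default `nu = 0.5`, and the Frank-Wolfe-style step size are all illustrative assumptions.

```python
# Illustrative only: the exact Q functional, any prior/penalty term, and the
# greedy update rule in Dai, Rigollet and Zhang (2012) may differ from this sketch.
import numpy as np

def greedy_q_aggregation(F, y, nu=0.5, n_steps=20):
    """Greedily minimize a Q-type objective over the simplex.

    F  : (n, M) array; column j holds the j-th dictionary function
         evaluated at the n fixed design points.
    y  : (n,) array of observed responses.
    nu : illustrative mixing weight in (0, 1) between the aggregate's
         empirical risk and the average of the individual risks.
    Returns a sparse weight vector lam with at most n_steps + 1 nonzeros.
    """
    n, M = F.shape
    indiv_risk = np.mean((y[:, None] - F) ** 2, axis=0)   # ||y - f_j||_n^2

    def q_value(lam):
        agg_risk = np.mean((y - F @ lam) ** 2)            # ||y - f_lam||_n^2
        return (1.0 - nu) * agg_risk + nu * lam @ indiv_risk

    # Start from the empirical risk minimizer in the dictionary (1-sparse).
    lam = np.zeros(M)
    lam[np.argmin(indiv_risk)] = 1.0

    for k in range(1, n_steps + 1):
        # Greedy step: move toward the vertex e_j that decreases Q the most,
        # with a Frank-Wolfe-style step size; each step adds at most one new
        # dictionary element, so the aggregate stays sparse.
        step = 2.0 / (k + 2)
        best_q, best_lam = q_value(lam), lam
        for j in range(M):
            cand = (1.0 - step) * lam
            cand[j] += step
            q = q_value(cand)
            if q < best_q:
                best_q, best_lam = q, cand
        lam = best_lam
    return lam
```

In this sketch the aggregated fit is `F @ lam`; setting `nu = 0` reduces the criterion to least squares over the simplex (convex aggregation), while `nu = 1` simply selects the best single function, which is why intermediate values of `nu` are the interesting regime.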

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.