The multi-armed bandit problem with covariates

Author(s): Perchet, Vianney; Rigollet, Philippe

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1kj64
Abstract: We consider a multi-armed bandit problem in a setting where each arm produces a noisy reward realization which depends on an observable random covariate. As opposed to the traditional static multi-armed bandit problem, this setting allows for dynamically changing rewards that better describe applications where side information is available. We adopt a nonparametric model where the expected rewards are smooth functions of the covariate and where the hardness of the problem is captured by a margin parameter. To maximize the expected cumulative reward, we introduce a policy called Adaptively Binned Successive Elimination (ABSE) that adaptively decomposes the global problem into suitably "localized" static bandit problems. This policy constructs an adaptive partition using a variant of the Successive Elimination (SE) policy. Our results include sharper regret bounds for the SE policy in a static bandit problem and minimax optimal regret bounds for the ABSE policy in the dynamic problem.
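The abstract names Successive Elimination (SE) as the static building block that ABSE runs on each cell of its adaptive partition. The following is a minimal sketch of a classical SE policy for a static bandit, not the paper's exact construction: the function names, the `pull` interface, and the Hoeffding-style confidence radius (including its constants) are illustrative assumptions.

```python
import math
import random

def successive_elimination(pull, n_arms, horizon, delta=0.05):
    """Sketch of Successive Elimination for a static bandit.

    `pull(arm)` returns a noisy reward in [0, 1]. The confidence
    radius is a standard Hoeffding-style choice, chosen for
    illustration rather than taken from the paper.
    """
    active = list(range(n_arms))
    counts = [0] * n_arms
    means = [0.0] * n_arms
    t = 0
    while t < horizon:
        # Pull every surviving arm once per round.
        for arm in active:
            if t >= horizon:
                break
            reward = pull(arm)
            counts[arm] += 1
            means[arm] += (reward - means[arm]) / counts[arm]
            t += 1
        if t >= horizon:
            break

        # Hoeffding-style confidence radius (illustrative constants).
        def radius(a):
            return math.sqrt(math.log(4 * n_arms * horizon / delta)
                             / (2 * counts[a]))

        # Eliminate arms whose upper confidence bound falls below the
        # best lower confidence bound among active arms.
        best_lcb = max(means[a] - radius(a) for a in active)
        active = [a for a in active if means[a] + radius(a) >= best_lcb]
    return max(active, key=lambda a: means[a])

# Example run on three hypothetical Bernoulli arms.
probs = [0.3, 0.5, 0.7]
best = successive_elimination(
    lambda a: 1.0 if random.random() < probs[a] else 0.0,
    n_arms=3, horizon=5000)
print("surviving arm:", best)
```

ABSE, per the abstract, goes further by partitioning the covariate space adaptively and running such eliminations locally; that machinery is not reproduced here.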
Publication Date: Apr-2013
Citation: Perchet, V., & Rigollet, P. (2013). The multi-armed bandit problem with covariates. The Annals of Statistics, 41(2), 693–721. doi:10.1214/13-AOS1101
DOI: 10.1214/13-AOS1101
ISSN: 0090-5364
Pages: 693–721
Type of Material: Journal Article
Journal/Proceeding Title: The Annals of Statistics
Version: Final published version. Article is made available in OAR by the publisher's permission or policy.
