
Contextual bandits with linear payoff functions

Author(s): Chu, W; Li, L; Reyzin, L; Schapire, RE

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr18v6f
Abstract: In this paper we study the contextual bandit problem (also known as the multi-armed bandit problem with expert advice) for linear payoff functions. For T rounds, K actions, and d-dimensional feature vectors, we prove an O(√(Td ln³(KT ln(T)/δ))) regret bound that holds with probability 1-δ for the simplest known (both conceptually and computationally) efficient upper confidence bound algorithm for this problem. We also prove a lower bound of Ω(√(Td)) for this setting, matching the upper bound up to logarithmic factors. Copyright 2011 by the authors.
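
For illustration, below is a minimal sketch of a LinUCB-style upper confidence bound rule for linear payoffs: a per-arm ridge-regression estimate plus an exploration bonus proportional to the estimate's uncertainty in the current context. This is not the exact algorithm (SupLinUCB) analyzed in the paper, and the width parameter `alpha` is a hypothetical tuning constant rather than the paper's confidence width.

```python
import numpy as np

class LinUCB:
    """Illustrative per-arm linear UCB: A = I + sum(x x^T), b = sum(r x)."""

    def __init__(self, n_arms: int, d: int, alpha: float = 1.0):
        self.alpha = alpha  # exploration width (hypothetical tuning constant)
        self.A = [np.eye(d) for _ in range(n_arms)]
        self.b = [np.zeros(d) for _ in range(n_arms)]

    def select(self, contexts: np.ndarray) -> int:
        """contexts: (n_arms, d) array, one feature vector per arm."""
        scores = []
        for a, x in enumerate(contexts):
            A_inv = np.linalg.inv(self.A[a])
            theta = A_inv @ self.b[a]                    # point estimate of arm a's coefficients
            bonus = self.alpha * np.sqrt(x @ A_inv @ x)  # confidence width in direction x
            scores.append(theta @ x + bonus)
        return int(np.argmax(scores))

    def update(self, arm: int, x: np.ndarray, reward: float) -> None:
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy usage: K arms, d-dimensional contexts, linear expected payoff plus noise.
rng = np.random.default_rng(0)
d, K, T = 5, 4, 2000
theta_true = rng.normal(size=(K, d))
bandit = LinUCB(n_arms=K, d=d, alpha=1.0)
for t in range(T):
    contexts = rng.normal(size=(K, d))
    a = bandit.select(contexts)
    reward = contexts[a] @ theta_true[a] + 0.1 * rng.normal()
    bandit.update(a, contexts[a], reward)
```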
Publication Date: 1-Dec-2011
Citation: Chu, W., Li, L., Reyzin, L., & Schapire, R. E. (2011). Contextual bandits with linear payoff functions. Journal of Machine Learning Research, 15, 208-214.
ISSN: 1532-4435
EISSN: 1533-7928
Pages: 208 - 214
Type of Material: Conference Article
Journal/Proceeding Title: Journal of Machine Learning Research
Version: Final published version. This is an open access article.



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.