
Contextual bandits with linear payoff functions

Author(s): Chu, W; Li, L; Reyzin, L; Schapire, Robert E

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr18v6f
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chu, W | -
dc.contributor.author | Li, L | -
dc.contributor.author | Reyzin, L | -
dc.contributor.author | Schapire, Robert E | -
dc.date.accessioned | 2021-10-08T19:47:21Z | -
dc.date.available | 2021-10-08T19:47:21Z | -
dc.date.issued | 2011-12-01 | en_US
dc.identifier.citation | Chu, W, Li, L, Reyzin, L, Schapire, RE. (2011). Contextual bandits with linear payoff functions. Journal of Machine Learning Research, 15, 208 - 214. | en_US
dc.identifier.issn | 1532-4435 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr18v6f | -
dc.description.abstract | In this paper we study the contextual bandit problem (also known as the multi-armed bandit problem with expert advice) for linear payoff functions. For T rounds, K actions, and d-dimensional feature vectors, we prove an O(√(Td ln³(KT ln(T)/δ))) regret bound that holds with probability 1-δ for the simplest known (both conceptually and computationally) efficient upper confidence bound algorithm for this problem. We also prove a lower bound of Ω(√(Td)) for this setting, matching the upper bound up to logarithmic factors. Copyright 2011 by the authors. | en_US
dc.format.extent | 208 - 214 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Journal of Machine Learning Research | en_US
dc.rights | Final published version. This is an open access article. | en_US
dc.title | Contextual bandits with linear payoff functions | en_US
dc.type | Conference Article | en_US
dc.identifier.eissn | 1533-7928 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US
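The abstract refers to an upper confidence bound algorithm for contextual bandits with linear payoffs. The following is a minimal illustrative sketch of a per-arm linear UCB selection rule in that spirit; it is not the paper's exact construction, and the function name, exploration parameter `alpha`, and the toy simulation data are all assumptions for illustration.

```python
import numpy as np

def linucb_round(A, b, x_t, alpha):
    """Pick the arm with the highest linear upper confidence bound.

    A:   per-arm d x d design matrices (start as identity, ridge prior)
    b:   per-arm d-vectors of reward-weighted contexts
    x_t: per-arm d-dimensional context vectors for this round
    alpha: width of the confidence interval (exploration strength)
    """
    ucbs = []
    for A_a, b_a, x in zip(A, b, x_t):
        A_inv = np.linalg.inv(A_a)
        theta = A_inv @ b_a                      # ridge-regression estimate
        width = alpha * np.sqrt(x @ A_inv @ x)   # confidence width for this arm
        ucbs.append(theta @ x + width)           # optimistic payoff estimate
    return int(np.argmax(ucbs))

# Toy simulation on synthetic data (assumed, not from the paper).
rng = np.random.default_rng(0)
d, K, T = 5, 3, 200
theta_star = rng.normal(size=d) / np.sqrt(d)     # hidden linear payoff parameter
A = [np.eye(d) for _ in range(K)]
b = [np.zeros(d) for _ in range(K)]
total_reward = 0.0
for t in range(T):
    x_t = [rng.normal(size=d) / np.sqrt(d) for _ in range(K)]
    a = linucb_round(A, b, x_t, alpha=1.0)
    r = float(theta_star @ x_t[a] + 0.1 * rng.normal())
    A[a] += np.outer(x_t[a], x_t[a])             # rank-one design update
    b[a] += r * x_t[a]
    total_reward += r
```

With all-identity design matrices and zero estimates, the rule reduces to picking the context with the largest norm, which is the pure-exploration behavior one expects before any rewards are observed.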

Files in This Item:
File | Description | Size | Format
ContextualBanditsLinearPayoffFunctions.pdf | | 1.3 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.