
Contextual Bandit Learning with Predictable Rewards

Author(s): Agarwal, Alekh; Dudík, Miroslav; Kale, Satyen; Langford, John; Schapire, Robert E

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1dj87
Full metadata record
dc.contributor.author: Agarwal, Alekh
dc.contributor.author: Dudík, Miroslav
dc.contributor.author: Kale, Satyen
dc.contributor.author: Langford, John
dc.contributor.author: Schapire, Robert E
dc.date.accessioned: 2021-10-08T19:47:21Z
dc.date.available: 2021-10-08T19:47:21Z
dc.date.issued: 2012
dc.identifier.citation: Agarwal, Alekh; Dudík, Miroslav; Kale, Satyen; Langford, John; Schapire, Robert E. (2012). Contextual Bandit Learning with Predictable Rewards.
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr1dj87
dc.description.abstract: Contextual bandit learning is a reinforcement learning problem in which the learner repeatedly receives a set of features (a context), takes an action, and receives a reward based on the action and context. We consider this problem under a realizability assumption: there exists a function in a known function class that always predicts the expected reward given the context and action. Under this assumption, we show three things. We present a new algorithm, Regressor Elimination, with a regret similar to the agnostic setting (i.e., in the absence of the realizability assumption). We prove a new lower bound showing that no algorithm can achieve superior performance in the worst case, even with the realizability assumption. However, we do show that for any set of policies (mapping contexts to actions), there is a distribution over rewards (given context) such that our new algorithm has constant regret, unlike previous approaches.
dc.language.iso: en_US
dc.relation.ispartof: 15th International Conference on Artificial Intelligence and Statistics (AISTATS) 2012
dc.rights: Final published version. This is an open access article.
dc.title: Contextual Bandit Learning with Predictable Rewards
dc.type: Conference Article
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article
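The abstract describes the contextual bandit protocol under a realizability assumption: each round, the learner observes a context, picks an action, and sees a reward whose expectation is given by some function in a known class. The following is a minimal sketch of that interaction loop only, not of the paper's Regressor Elimination algorithm; the names (true_reward, K, T) and the uniform action choice are illustrative assumptions, not from the paper.

```python
import random

random.seed(0)

K = 3    # number of actions (assumed for illustration)
T = 100  # number of rounds (assumed for illustration)

def true_reward(context, action):
    # Realizability assumption: the expected reward is given by some
    # function in a known class (here, a simple linear function).
    return 0.5 * context + 0.1 * action

total_reward = 0.0
for t in range(T):
    context = random.random()        # learner observes a context
    action = random.randrange(K)     # learner takes an action (here: uniformly at random)
    # Observed reward = expected reward plus zero-mean noise.
    reward = true_reward(context, action) + random.uniform(-0.05, 0.05)
    total_reward += reward
```

A real algorithm such as Regressor Elimination would instead maintain a set of candidate regressors, choose actions to discriminate among them, and eliminate regressors inconsistent with the observed rewards.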

Files in This Item:

File: ContextualBanditLearningPredictableRewards.pdf
Size: 281.53 kB
Format: Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.