
Contextual Bandit Algorithms with Supervised Learning Guarantees

Author(s): Beygelzimer, Alina; Langford, John; Li, Lihong; Reyzin, Lev; Schapire, Robert E.

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1jc0d
Abstract: We address the problem of learning in an online, bandit setting where the learner must repeatedly select among $K$ actions, but only receives partial feedback based on its choices. We establish two new facts: First, using a new algorithm called Exp4.P, we show that it is possible to compete with the best in a set of $N$ experts with probability $1-\delta$ while incurring regret at most $O(\sqrt{KT\ln(N/\delta)})$ over $T$ time steps. The new algorithm is tested empirically on a large-scale, real-world dataset. Second, we give a new algorithm called VE that competes with a possibly infinite set of policies of VC-dimension $d$ while incurring regret at most $O(\sqrt{T(d\ln(T) + \ln (1/\delta))})$ with probability $1-\delta$. These guarantees improve on those of all previous algorithms, whether in a stochastic or adversarial environment, and bring us closer to providing supervised-learning-type guarantees for the contextual bandit setting.
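For concreteness, the following is a minimal NumPy sketch of an Exp4.P-style exponential-weights loop, following the high-level description in the abstract: mix expert advice into an action distribution with a uniform exploration floor, play an action, form importance-weighted reward estimates, and update expert weights with a confidence-style bonus. The helper names (`expert_advice`, `pull`), the constants, and the exact update are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def exp4p_sketch(T, K, delta, expert_advice, pull, rng=None):
    """Illustrative Exp4.P-style bandit loop (assumed interface, not the paper's code).

    expert_advice(t) -> (N, K) array; row i is expert i's probability
        distribution over the K actions at round t.
    pull(t, a) -> observed reward in [0, 1] for playing action a at round t.
    """
    rng = rng or np.random.default_rng()
    N = expert_advice(0).shape[0]
    # Exploration floor; assumes T is large enough that K * pmin <= 1.
    pmin = np.sqrt(np.log(N / delta) / (K * T))
    w = np.ones(N)                       # exponential weights over experts
    total_reward = 0.0
    for t in range(T):
        xi = expert_advice(t)            # (N, K) advice matrix
        q = w / w.sum()                  # current distribution over experts
        # Mix expert advice, then smooth so every action keeps pmin mass.
        p = (1.0 - K * pmin) * (q @ xi) + pmin
        a = rng.choice(K, p=p)
        r = pull(t, a)
        total_reward += r
        # Importance-weighted estimate: unbiased for the full reward vector.
        rhat = np.zeros(K)
        rhat[a] = r / p[a]
        yhat = xi @ rhat                 # estimated reward of each expert
        vhat = (xi / p).sum(axis=1)      # variance-control term per expert
        # Weight update with a bonus term (assumed form of the confidence bonus).
        bonus = vhat * np.sqrt(np.log(N / delta) / (K * T))
        w = w * np.exp((pmin / 2.0) * (yhat + bonus))
    return total_reward
```

The exploration floor `pmin` bounds the importance weights `1 / p[a]`, and the variance-style bonus added to each expert's estimated reward is what distinguishes this high-probability variant from a plain Exp4-style expected-regret update.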
Publication Date: 2011
Citation: Beygelzimer, Alina, Langford, John, Li, Lihong, Reyzin, Lev, and Schapire, Robert E. (2011). Contextual Bandit Algorithms with Supervised Learning Guarantees. Journal of Machine Learning Research.
Type of Material: Conference Article
Journal/Proceeding Title: Journal of Machine Learning Research
Version: Final published version. This is an open access article.
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.