
Multi-armed Bandit Problems with Strategic Arms

Author(s): Braverman, Mark; Mao, Jieming; Schneider, Jon; Weinberg, S. Matthew

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr16j9k
Full metadata record
dc.contributor.author: Braverman, Mark
dc.contributor.author: Mao, Jieming
dc.contributor.author: Schneider, Jon
dc.contributor.author: Weinberg, S. Matthew
dc.date.accessioned: 2021-10-08T19:48:00Z
dc.date.available: 2021-10-08T19:48:00Z
dc.date.issued: 2019
dc.identifier.citation: Braverman, Mark, Jieming Mao, Jon Schneider, and S. Matthew Weinberg. "Multi-armed Bandit Problems with Strategic Arms." In Conference on Learning Theory 99 (2019): pp. 383-416.
dc.identifier.issn: 2640-3498
dc.identifier.uri: http://proceedings.mlr.press/v99/braverman19b.html
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr16j9k
dc.description.abstract: We study a strategic version of the multi-armed bandit problem, where each arm is an individual strategic agent and we, the principal, pull one arm each round. When pulled, the arm receives some private reward $v_a$ and can choose an amount $x_a$ to pass on to the principal (keeping $v_a - x_a$ for itself). All non-pulled arms get reward 0. Each strategic arm tries to maximize its own utility over the course of $T$ rounds. Our goal is to design an algorithm for the principal incentivizing these arms to pass on as much of their private rewards as possible. When private rewards are stochastically drawn each round ($v_a^t \leftarrow D_a$), we show that:

- Algorithms that perform well in the classic adversarial multi-armed bandit setting necessarily perform poorly: for all algorithms that guarantee low regret in an adversarial setting, there exist distributions $D_1, \ldots, D_k$ and an $o(T)$-approximate Nash equilibrium for the arms where the principal receives reward $o(T)$.
- There exists an algorithm for the principal that induces a game among the arms where each arm has a dominant strategy. Moreover, for every $o(T)$-approximate Nash equilibrium, the principal receives expected reward $\mu' T - o(T)$, where $\mu'$ is the second-largest of the means $\mathbb{E}[D_a]$. This algorithm maintains its guarantee if the arms are non-strategic ($x_a = v_a$), and also if there is a mix of strategic and non-strategic arms.
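To make the interaction model concrete, here is a minimal simulation sketch of the setting described in the abstract. It is not the paper's dominant-strategy mechanism: the exponential reward distributions, the fixed per-arm pass fractions, the explore-then-exploit rule, and the names `simulate`, `pass_fractions`, and `explore_rounds` are all illustrative assumptions.

```python
# A minimal sketch of the interaction model from the abstract, not the
# paper's mechanism: arms here play fixed (hypothetical) passing strategies,
# and the principal runs a naive explore-then-exploit rule.
import random


def simulate(means, pass_fractions, T, explore_rounds=100, seed=0):
    """Simulate T rounds: each round the principal pulls one arm a, the arm
    draws a private reward v_a^t <- D_a (exponential here, by assumption),
    passes x_a^t = pass_fractions[a] * v_a^t, and keeps the rest."""
    rng = random.Random(seed)
    k = len(means)
    passed = [0.0] * k   # total reward each arm has passed on so far
    pulls = [0] * k
    principal_total = 0.0

    for t in range(T):
        if t < explore_rounds * k:
            a = t % k  # round-robin exploration phase
        else:
            # Exploit: pull the arm with the best observed passed-reward rate.
            a = max(range(k), key=lambda i: passed[i] / max(pulls[i], 1))
        v = rng.expovariate(1.0 / means[a])  # private reward v_a^t <- D_a
        x = pass_fractions[a] * v            # amount passed to the principal
        passed[a] += x
        pulls[a] += 1
        principal_total += x

    return principal_total


# Arm 0 has the largest mean but passes on little, so this naive principal
# ends up pulling arm 1, which passes on more of a smaller reward.
print(simulate(means=[1.0, 0.8, 0.5], pass_fractions=[0.1, 0.9, 0.9], T=10_000))
```

Note that under such a naive rule an arm that has secured the principal's commitment has little incentive to keep passing rewards; this is the kind of strategic collapse the paper's first result formalizes for algorithms with low adversarial regret.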
dc.format.extent: 383 - 416
dc.language.iso: en_US
dc.relation.ispartof: Conference on Learning Theory
dc.rights: Final published version. Article is made available in OAR by the publisher's permission or policy.
dc.title: Multi-armed Bandit Problems with Strategic Arms
dc.type: Conference Article
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article

Files in This Item:
MultiArmedBanditProblemsStrategicArms.pdf (409.2 kB, Adobe PDF)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.