
Distributed cooperative decision-making in multiarmed bandits: Frequentist and Bayesian algorithms

Author(s): Landgren, P; Srivastava, V; Leonard, NE

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr15t0j
Abstract: We study distributed cooperative decision-making under the explore-exploit tradeoff in the multiarmed bandit (MAB) problem. We extend state-of-the-art frequentist and Bayesian algorithms for single-agent MAB problems to cooperative distributed algorithms for multi-agent MAB problems in which agents communicate according to a fixed network graph. We rely on a running consensus algorithm for each agent's estimation of mean rewards from its own rewards and the estimated rewards of its neighbors. We prove bounds on the performance of these algorithms and show that they asymptotically recover the performance of a centralized agent. Further, we rigorously characterize the influence of the communication graph structure on the decision-making performance of the group.
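
The abstract outlines the core mechanism: each agent runs a single-agent bandit policy, but feeds it consensus estimates of cumulative rewards and pull counts that are mixed with neighbors' estimates at every step over the fixed communication graph. The following Python sketch illustrates that running-consensus idea for the frequentist (UCB-style) case. It is only a minimal illustration under stated assumptions: the ring graph, Metropolis-Hastings consensus weights, Gaussian reward model, horizon, and the plain UCB1-style exploration bonus are all choices made here, not the paper's exact algorithms or their graph-dependent tuning.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem instance (values assumed, not from the paper):
# N agents, M arms, Gaussian rewards with known standard deviation.
N, M, T = 4, 3, 2000
true_means = np.array([0.2, 0.5, 0.9])
sigma = 0.5

# Fixed communication graph: a 4-cycle, encoded by its adjacency matrix.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Metropolis-Hastings weights yield a doubly stochastic consensus matrix P.
deg = A.sum(axis=1)
P = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if A[i, j] > 0:
            P[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    P[i, i] = 1.0 - P[i].sum()

# Running-consensus state, one row per agent: consensus estimates of
# per-arm cumulative rewards (s) and per-arm pull counts (n).
s = np.zeros((N, M))
n = np.zeros((N, M))

for t in range(T):
    picks = np.empty(N, dtype=int)
    for i in range(N):
        if t < M:
            picks[i] = t  # initialization: every agent pulls each arm once
        else:
            mean_est = s[i] / n[i]
            bonus = sigma * np.sqrt(2.0 * np.log(t + 1) / n[i])  # UCB1-style
            picks[i] = int(np.argmax(mean_est + bonus))

    # Each agent pulls its chosen arm and observes a local reward.
    rewards = rng.normal(true_means[picks], sigma)
    local_s = np.zeros((N, M))
    local_n = np.zeros((N, M))
    local_s[np.arange(N), picks] = rewards
    local_n[np.arange(N), picks] = 1.0

    # Running consensus: fold in the new local observations, then average
    # with neighbors through one step of the consensus matrix.
    s = P @ (s + local_s)
    n = P @ (n + local_n)

print("per-agent mean-reward estimates:")
print(s / n)

Because the mixing matrix P is doubly stochastic, each agent's consensus state tracks the network-wide average of cumulative rewards and counts, which is the intuition behind decentralized agents asymptotically approaching the estimates available to a centralized agent.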
Publication Date: 27-Dec-2016
Citation: Landgren, P., Srivastava, V., & Leonard, N. E. (2016). Distributed cooperative decision-making in multiarmed bandits: Frequentist and Bayesian algorithms. 2016 IEEE 55th Conference on Decision and Control (CDC 2016), 167-172. doi:10.1109/CDC.2016.7798264
DOI: 10.1109/CDC.2016.7798264
Pages: 167-172
Type of Material: Conference Proceeding
Journal/Proceeding Title: 2016 IEEE 55th Conference on Decision and Control, CDC 2016

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.