On distributed cooperative decision-making in multiarmed bandits

Author(s): Landgren, P; Srivastava, V; Leonard, NE

Abstract: © 2016 EUCA. We study the explore-exploit tradeoff in distributed cooperative decision-making using the context of the multiarmed bandit (MAB) problem. For the distributed cooperative MAB problem, we design the cooperative UCB algorithm that comprises two interleaved distributed processes: (i) running consensus algorithms for estimation of rewards, and (ii) upper-confidence-bound-based heuristics for selection of arms. We rigorously analyze the performance of the cooperative UCB algorithm and characterize the influence of communication graph structure on the decision-making performance of the group.
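The two interleaved processes described in the abstract (running consensus on reward statistics, plus a UCB index for arm selection) can be sketched as follows. This is an illustrative simulation under assumed details, not the paper's exact algorithm: the agents, the doubly stochastic mixing matrix `P`, the Gaussian reward noise, and the specific UCB index `s/n + sqrt(2 log t / n)` are all assumptions for the sketch.

```python
import numpy as np

def cooperative_ucb(means, P, horizon, rng):
    """Illustrative sketch of cooperative UCB (assumed details):
    each agent keeps running-consensus estimates of per-arm reward
    sums and pull counts, and selects arms via a UCB index."""
    N = P.shape[0]          # number of agents (rows of the mixing matrix)
    M = len(means)          # number of arms
    s = np.zeros((N, M))    # consensus estimates of cumulative rewards
    n = np.zeros((N, M))    # consensus estimates of pull counts
    regret = np.zeros(N)    # per-agent cumulative regret
    best = max(means)
    for t in range(1, horizon + 1):
        xi = np.zeros((N, M))    # rewards collected this round
        zeta = np.zeros((N, M))  # pull indicators this round
        for k in range(N):
            if t <= M:
                a = t - 1  # initialization: sample each arm once
            else:
                # UCB index from the agent's consensus estimates
                ucb = s[k] / n[k] + np.sqrt(2.0 * np.log(t) / n[k])
                a = int(np.argmax(ucb))
            xi[k, a] = means[a] + rng.normal(0.0, 0.1)  # noisy reward
            zeta[k, a] = 1.0
            regret[k] += best - means[a]
        # running consensus step: mix new observations over the graph
        s = P @ (s + xi)
        n = P @ (n + zeta)
    return regret
```

Because `P` is doubly stochastic, the column sums of `n` are preserved, so the agents' count estimates collectively track the true total number of pulls; a better-connected graph mixes these statistics faster, which is one way the communication graph structure enters the performance analysis.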
Publication Date: 6-Jan-2017
Citation: Landgren, P., Srivastava, V., & Leonard, N. E. (2017). On distributed cooperative decision-making in multiarmed bandits. 2016 European Control Conference, ECC 2016, 243-248. doi:10.1109/ECC.2016.7810293
DOI: 10.1109/ECC.2016.7810293
Pages: 243 - 248
Type of Material: Conference Proceeding
Journal/Proceeding Title: 2016 European Control Conference, ECC 2016

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.