A Dynamic Observation Strategy for Multi-agent Multi-armed Bandit Problem

Author(s): Madhushani, Udari; Leonard, Naomi

Abstract: We define and analyze a multi-agent multi-armed bandit problem in which decision-making agents can observe the choices and rewards of their neighbors under a linear observation cost. Neighbors are defined by a network graph that encodes the inherent observation constraints of the system. We define a cost associated with observations such that each time an agent makes an observation it incurs a constant observation regret. We design a sampling algorithm and an observation protocol for each agent to maximize its own expected cumulative reward by minimizing expected cumulative sampling regret and expected cumulative observation regret. For our proposed protocol, we prove that total cumulative regret is logarithmically bounded. We verify the accuracy of the analytical bounds using numerical simulations.
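The abstract describes the general setup but not the exact sampling rule or observation protocol. The sketch below is a hypothetical illustration of the setting, not the paper's algorithm: a single agent runs a standard UCB1 index for arm selection, and a made-up observation trigger (observe a neighbor's sample only while that arm is under-sampled relative to log t) stands in for the paper's dynamic observation strategy, so that observations taper off and the constant per-observation cost accumulates slowly.

```python
import math
import random

# Toy illustration of a bandit agent that pays a constant cost each time it
# observes a neighbor's (arm, reward) pair. The UCB1 index is standard; the
# observation trigger in maybe_observe is a hypothetical stand-in for the
# paper's protocol, chosen only so observations become rarer over time.

class BanditAgent:
    def __init__(self, n_arms, obs_cost=0.1):
        self.n_arms = n_arms
        self.obs_cost = obs_cost      # constant regret incurred per observation
        self.counts = [0] * n_arms    # samples per arm (own pulls + observations)
        self.means = [0.0] * n_arms   # empirical mean reward per arm
        self.t = 0                    # number of own pulls so far
        self.obs_regret = 0.0         # accumulated observation regret

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.means[arm] += (reward - self.means[arm]) / n

    def choose_arm(self):
        self.t += 1
        # Pull each arm once, then follow the UCB1 index.
        for a in range(self.n_arms):
            if self.counts[a] == 0:
                return a
        def ucb(a):
            return self.means[a] + math.sqrt(2 * math.log(self.t) / self.counts[a])
        return max(range(self.n_arms), key=ucb)

    def maybe_observe(self, neighbor_arm, neighbor_reward):
        # Hypothetical dynamic trigger: observe only while the neighbor's arm
        # is under-sampled relative to log(t). Observations then taper off,
        # keeping cumulative observation regret small.
        if self.counts[neighbor_arm] < 1 + math.log(self.t + 1):
            self.obs_regret += self.obs_cost
            self.update(neighbor_arm, neighbor_reward)


def run_demo(horizon=2000):
    arm_means = [0.3, 0.5, 0.7]       # Bernoulli arm parameters (arm 2 is best)
    rng = random.Random(42)
    agent = BanditAgent(n_arms=3)
    for _ in range(horizon):
        arm = agent.choose_arm()
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        agent.update(arm, reward)
        # A neighbor samples uniformly at random; the agent decides whether
        # paying the observation cost for that sample is worthwhile.
        n_arm = rng.randrange(3)
        n_reward = 1.0 if rng.random() < arm_means[n_arm] else 0.0
        agent.maybe_observe(n_arm, n_reward)
    return agent
```

Running `run_demo()` shows the two qualitative effects the abstract points at: the agent concentrates its own pulls on the best arm, while the log-scaled trigger keeps total observation regret far below what always-observing (cost × horizon) would incur.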
Publication Date: 2020
Citation: Madhushani, Udari, and Naomi Ehrich Leonard. "A Dynamic Observation Strategy for Multi-agent Multi-armed Bandit Problem." In European Control Conference (ECC) (2020): pp. 1677-1682. doi:10.23919/ECC51009.2020.9143736
DOI: 10.23919/ECC51009.2020.9143736
Pages: 1677 - 1682
Type of Material: Conference Article
Journal/Proceeding Title: European Control Conference (ECC)
Version: Author's manuscript

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.