Modeling Human Decision Making in Generalized Gaussian Multiarmed Bandits
Author(s): Reverdy, Paul B; Srivastava, Vaibhav; Leonard, Naomi Ehrich
To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr1gs1p
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Reverdy, Paul B | - |
dc.contributor.author | Srivastava, Vaibhav | - |
dc.contributor.author | Leonard, Naomi Ehrich | - |
dc.date.accessioned | 2021-10-08T20:20:05Z | - |
dc.date.available | 2021-10-08T20:20:05Z | - |
dc.date.issued | 2014-04 | en_US |
dc.identifier.citation | Reverdy, Paul B, Srivastava, Vaibhav, Leonard, Naomi Ehrich. (2014). Modeling Human Decision Making in Generalized Gaussian Multiarmed Bandits. Proceedings of the IEEE, 102 (4), 544 - 571. doi:10.1109/JPROC.2014.2307024 | en_US |
dc.identifier.issn | 0018-9219 | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1gs1p | - |
dc.description.abstract | In this paper, we present a formal model of human decision making in explore-exploit tasks using the context of multiarmed bandit problems, where the decision maker must choose among multiple options with uncertain rewards. We address the standard multiarmed bandit problem, the multiarmed bandit problem with transition costs, and the multiarmed bandit problem on graphs. We focus on the case of Gaussian rewards in a setting where the decision maker uses Bayesian inference to estimate the reward values. We model the decision maker's prior knowledge with the Bayesian prior on the mean reward. We develop the upper-credible-limit (UCL) algorithm for the standard multiarmed bandit problem and show that this deterministic algorithm achieves logarithmic cumulative expected regret, which is optimal performance for uninformative priors. We show how good priors and good assumptions on the correlation structure among arms can greatly enhance decision-making performance, even over short time horizons. We extend the UCL algorithm to the stochastic UCL algorithm and draw several connections to human decision-making behavior. We present empirical data from human experiments and show that human performance is efficiently captured by the stochastic UCL algorithm with appropriate parameters. For the multiarmed bandit problem with transition costs and the multiarmed bandit problem on graphs, we generalize the UCL algorithm to the block UCL algorithm and the graphical block UCL algorithm, respectively. We show that these algorithms also achieve logarithmic cumulative expected regret and require a sublogarithmic expected number of transitions among arms. We further illustrate the performance of these algorithms with numerical examples. | en_US |
dc.format.extent | 544 - 571 | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartof | Proceedings of the IEEE | en_US |
dc.rights | Author's manuscript | en_US |
dc.title | Modeling Human Decision Making in Generalized Gaussian Multiarmed Bandits | en_US |
dc.type | Journal Article | en_US |
dc.identifier.doi | doi:10.1109/JPROC.2014.2307024 | - |
dc.identifier.eissn | 1558-2256 | - |
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US |
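The abstract's deterministic UCL rule for Gaussian rewards with conjugate Normal priors can be sketched as follows. This is an illustrative reconstruction, not the authors' reference implementation: the independent-arms setting, the known reward variance `var`, the prior parameters `mu0`/`var0`, and the credible level `1 - 1/(K*t)` with the constant `K` are assumptions made here for the sketch.

```python
import math
from statistics import NormalDist

def ucl_bandit(pull, n_arms, horizon, mu0=0.0, var0=1e6, var=1.0,
               K=math.sqrt(2 * math.pi * math.e)):
    """Sketch of a deterministic upper-credible-limit (UCL) bandit rule.

    pull(i) returns a reward sample for arm i. Each arm gets an
    independent Normal prior N(mu0, var0); rewards are assumed Gaussian
    with known variance `var` (illustrative assumptions). Returns the
    list of arms chosen at each round.
    """
    inv_cdf = NormalDist().inv_cdf  # standard-normal quantile function
    counts = [0] * n_arms           # number of pulls per arm
    sums = [0.0] * n_arms           # summed rewards per arm
    choices = []
    for t in range(1, horizon + 1):
        # Credible level 1 - 1/(K t) tightens with t, shrinking exploration.
        alpha = inv_cdf(1.0 - 1.0 / (K * t))
        best, best_q = 0, -math.inf
        for i in range(n_arms):
            # Conjugate Normal update: posterior precision and mean.
            prec = 1.0 / var0 + counts[i] / var
            post_var = 1.0 / prec
            post_mean = post_var * (mu0 / var0 + sums[i] / var)
            # Upper credible limit: posterior mean plus scaled posterior s.d.
            q = post_mean + math.sqrt(post_var) * alpha
            if q > best_q:
                best, best_q = i, q
        r = pull(best)
        counts[best] += 1
        sums[best] += r
        choices.append(best)
    return choices
```

A nearly uninformative prior (`var0` large) makes every arm's initial upper credible limit huge, so each arm is tried early; good priors (smaller `var0`, informed `mu0`) would instead focus sampling immediately, which is the short-horizon benefit the abstract describes.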
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
Modeling human decision making in generalized gaussian multiarmed bandits.pdf | | 827.88 kB | Adobe PDF |
Modeling human decision making in generalized gaussian multiarmed bandits.pdf | | 827.61 kB | Adobe PDF |
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.