Satisficing in Multi-Armed Bandit Problems

Author(s): Reverdy, P; Srivastava, V; Leonard, Naomi E

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1hs11
Full metadata record
dc.contributor.author: Reverdy, P
dc.contributor.author: Srivastava, V
dc.contributor.author: Leonard, Naomi E
dc.date.accessioned: 2021-10-08T20:20:01Z
dc.date.available: 2021-10-08T20:20:01Z
dc.date.issued: 2017
dc.identifier.citation: Reverdy, P., Srivastava, V., & Leonard, N. E. (2017). Satisficing in Multi-Armed Bandit Problems. IEEE Transactions on Automatic Control, 62, 3788-3803. doi:10.1109/TAC.2016.2644380
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr1hs11
dc.description.abstract: Satisficing is a relaxation of maximizing and allows for less risky decision making in the face of uncertainty. We propose two sets of satisficing objectives for the multi-armed bandit problem, where the objective is to achieve reward-based decision-making performance above a given threshold. We show that these new problems are equivalent to various standard multi-armed bandit problems with maximizing objectives and use the equivalence to find bounds on performance. The different objectives can result in qualitatively different behavior; for example, agents explore their options continually in one case and only a finite number of times in another. For the case of Gaussian rewards, we show an additional equivalence between the two sets of satisficing objectives that allows algorithms developed for one set to be applied to the other. We then develop variants of the Upper Credible Limit (UCL) algorithm that solve the problems with satisficing objectives and show that these modified UCL algorithms achieve efficient satisficing performance.
dc.format.extent: 3788 - 3803
dc.language.iso: en_US
dc.relation.ispartof: IEEE Transactions on Automatic Control
dc.rights: Author's manuscript
dc.title: Satisficing in Multi-Armed Bandit Problems
dc.type: Journal Article
dc.identifier.doi: doi:10.1109/TAC.2016.2644380
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article
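To make the abstract's threshold idea concrete, here is a minimal sketch of threshold-based satisficing on a Gaussian bandit. This is an illustration only, not the paper's UCL algorithm (which uses Bayesian upper credible limits); the frequentist UCB-style index, the `satisficing_ucb` name, and the commit-when-lower-bound-clears-threshold stopping rule are all assumptions made for the example. It does, however, exhibit the behavior the abstract describes: exploration stops after finitely many pulls once some arm is provably "good enough."

```python
import math
import random

def satisficing_ucb(means, threshold, horizon, sigma=1.0, seed=0):
    """Illustrative satisficing sketch (not the paper's UCL algorithm):
    pull arms by a UCB1-style index, then commit permanently to an arm
    once its lower confidence bound clears the satisficing threshold."""
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n   # pulls per arm
    sums = [0.0] * n   # cumulative reward per arm
    committed = None   # arm we have satisficed on, if any
    picks = []
    for t in range(1, horizon + 1):
        if committed is not None:
            arm = committed                  # exploration has ended
        elif t <= n:
            arm = t - 1                      # initialize: pull each arm once
        else:
            # frequentist upper confidence index for sub-Gaussian rewards
            arm = max(range(n), key=lambda i: sums[i] / counts[i]
                      + sigma * math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = rng.gauss(means[arm], sigma)
        counts[arm] += 1
        sums[arm] += reward
        picks.append(arm)
        if committed is None:
            # satisficing stopping rule: finitely many exploratory pulls,
            # then commit once an arm's lower bound exceeds the threshold
            lcb = sums[arm] / counts[arm] - sigma * math.sqrt(
                2.0 * math.log(t) / counts[arm])
            if lcb >= threshold:
                committed = arm
    return picks, committed

picks, committed = satisficing_ucb([0.0, 0.5, 1.0], threshold=0.6, horizon=500)
```

Contrast with maximizing UCB, which never commits and keeps exploring (logarithmically often) for the whole horizon; the finite-exploration behavior above is one of the qualitative differences the abstract highlights.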

Files in This Item:
File: Satisficing in Multi-Armed Bandit Problems.pdf
Size: 833.98 kB
Format: Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.