
Optimal Behavioral Hierarchy

Author(s): Solway, Alec; Diuk, Carlos; Córdova, Natalia; Yee, Debbie; Barto, Andrew G.; Niv, Yael; Botvinick, Matthew M.

Abstract: In reinforcement learning, a decision maker searching for the most rewarding option is often faced with the question: what is the value of an option that has never been tried before? One way to frame this question is as an inductive problem: how can I generalize my previous experience with one set of options to a novel option? We show how hierarchical Bayesian inference can be used to solve this problem, and describe an equivalence between the Bayesian model and temporal difference learning algorithms that have been proposed as models of reinforcement learning in humans and animals. According to our view, the search for the best option is guided by abstract knowledge about the relationships between different options in an environment, resulting in greater search efficiency compared to traditional reinforcement learning algorithms previously applied to human cognition. In two behavioral experiments, we test several predictions of our model, providing evidence that humans learn and exploit structured inductive knowledge to make predictions about novel options. In light of this model, we suggest a new interpretation of dopaminergic responses to novelty.
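The abstract describes using hierarchical Bayesian inference to generalize from familiar options to a novel one. The paper specifies the full model; as a minimal illustrative sketch only (not the authors' implementation), the following assumes a normal-normal conjugate setup in which options in an environment share a group-level mean reward, and the value of a never-tried option is estimated from the posterior over that shared mean:

```python
def posterior_group_mean(option_means, prior_mean=0.0, prior_var=1.0, obs_var=1.0):
    """Normal-normal conjugate update: infer the group-level mean reward
    from the observed mean rewards of several familiar options.

    The posterior mean doubles as a prediction for a novel option
    drawn from the same environment (hypothetical toy model, not the
    paper's model)."""
    n = len(option_means)
    precision = 1.0 / prior_var + n / obs_var
    mean = (prior_mean / prior_var + sum(option_means) / obs_var) / precision
    return mean, 1.0 / precision

# Observed mean rewards of three familiar options in one environment
observed = [0.8, 1.1, 0.9]
novel_value_estimate, posterior_var = posterior_group_mean(observed)
# The estimate is shrunk toward the prior mean (0.0), illustrating how
# abstract knowledge about the environment constrains the prediction.
```

The shrinkage toward the group-level mean is what gives a structured learner its head start on a novel option, compared to a flat temporal-difference learner that must start from scratch.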
Publication Date: 14-Aug-2014
Electronic Publication Date: 14-Aug-2014
Citation: Solway, Alec, Diuk, Carlos, Córdova, Natalia, Yee, Debbie, Barto, Andrew G., Niv, Yael, Botvinick, Matthew M. (2014). Optimal Behavioral Hierarchy. PLoS Computational Biology, 10 (8), e1003779. doi:10.1371/journal.pcbi.1003779
DOI: 10.1371/journal.pcbi.1003779
EISSN: 1553-7358
Type of Material: Journal Article
Journal/Proceeding Title: PLoS Computational Biology
Version: Final published version. This is an open access article.

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.