
Novelty and Inductive Generalization in Human Reinforcement Learning

Author(s): Gershman, Samuel J.; Niv, Yael

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1sq9b
Full metadata record
DC Field | Value | Language
dc.contributor.author | Gershman, Samuel J. | -
dc.contributor.author | Niv, Yael | -
dc.date.accessioned | 2019-10-28T15:54:34Z | -
dc.date.available | 2019-10-28T15:54:34Z | -
dc.date.issued | 2015-07 | en_US
dc.identifier.citation | Gershman, Samuel J., Niv, Yael. (2015). Novelty and Inductive Generalization in Human Reinforcement Learning. Topics in Cognitive Science, 7 (3), 391 - 415. doi:10.1111/tops.12138 | en_US
dc.identifier.issn | 1756-8757 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1sq9b | -
dc.description.abstract | In reinforcement learning, a decision maker searching for the most rewarding option is often faced with the question: what is the value of an option that has never been tried before? One way to frame this question is as an inductive problem: how can I generalize my previous experience with one set of options to a novel option? We show how hierarchical Bayesian inference can be used to solve this problem, and describe an equivalence between the Bayesian model and temporal difference learning algorithms that have been proposed as models of reinforcement learning in humans and animals. According to our view, the search for the best option is guided by abstract knowledge about the relationships between different options in an environment, resulting in greater search efficiency compared to traditional reinforcement learning algorithms previously applied to human cognition. In two behavioral experiments, we test several predictions of our model, providing evidence that humans learn and exploit structured inductive knowledge to make predictions about novel options. In light of this model, we suggest a new interpretation of dopaminergic responses to novelty. | en_US
dc.format.extent | 391 - 415 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Topics in Cognitive Science | en_US
dc.rights | Author's manuscript | en_US
dc.title | Novelty and Inductive Generalization in Human Reinforcement Learning | en_US
dc.type | Journal Article | en_US
dc.identifier.doi | doi:10.1111/tops.12138 | -
dc.date.eissued | 2015-03-23 | en_US
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US
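
The abstract above contrasts two ways of valuing a never-tried option: a standard temporal difference (delta-rule) update, which has no estimate for an option until it is sampled, and hierarchical Bayesian inference, where abstract knowledge about how options relate supplies a prior for the novel option. The sketch below is an illustrative toy in Python, not the paper's model or code: the Gaussian hierarchy, parameter values, and option names are assumptions made only to show the contrast.

```python
# Illustrative sketch only (not the paper's model): (1) a delta-rule / TD-style
# value update for experienced options, and (2) a hierarchical Gaussian model
# in which option values share a group-level mean, so experience with some
# options yields a prior for a novel, never-tried option.
# All priors, noise levels, and option names below are assumptions.

import numpy as np

rng = np.random.default_rng(0)

# ----- (1) Delta-rule (TD-style) learning for experienced options -----
alpha = 0.1                      # learning rate (assumed)
values = {"A": 0.0, "B": 0.0}    # incremental value estimates
rewards = {"A": 0.8, "B": 0.6}   # true mean rewards (assumed)

for _ in range(200):
    a = rng.choice(list(values))
    r = rewards[a] + 0.1 * rng.standard_normal()
    values[a] += alpha * (r - values[a])   # prediction-error update

print("Delta-rule estimates:", {a: round(v, 2) for a, v in values.items()})
# A never-tried option "C" has no estimate at all under this scheme.

# ----- (2) Hierarchical Gaussian generalization to a novel option -----
# Generative assumptions (illustrative): group mean mu ~ N(mu0, tau0^2),
# each option's value v_i ~ N(mu, tau^2), rewards r ~ N(v_i, sigma^2).
mu0, tau0 = 0.0, 1.0     # prior on the group-level mean
tau, sigma = 0.3, 0.1    # across-option and within-option noise
n_obs = {"A": 100, "B": 100}
sample_means = values    # reuse the delta-rule estimates as observed means

# Conjugate posterior over the group mean mu, integrating out each v_i:
# the observed mean of option i is distributed N(mu, tau^2 + sigma^2 / n_i).
precision = 1.0 / tau0**2
weighted = mu0 / tau0**2
for a, m in sample_means.items():
    var_i = tau**2 + sigma**2 / n_obs[a]
    precision += 1.0 / var_i
    weighted += m / var_i
mu_post_mean = weighted / precision
mu_post_var = 1.0 / precision

# Prior predictive value of a novel option "C": centred on the inferred
# group mean, with uncertainty from both mu and the across-option spread.
novel_mean = mu_post_mean
novel_var = mu_post_var + tau**2
print(f"Novel option prior: mean={novel_mean:.2f}, sd={np.sqrt(novel_var):.2f}")
```

Under these toy assumptions, the novel option's prior mean lands near the average of the experienced options' estimates, shrunk slightly toward the group-level prior, which is the qualitative generalization effect the abstract describes.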

Files in This Item:
File | Size | Format
nihms668168.pdf | 1.83 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.