Offline replay supports planning in human reinforcement learning.

Author(s): Momennejad, Ida; Otto, A. Ross; Daw, Nathaniel D.; Norman, Kenneth A.

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1gf29
Abstract: Making decisions in sequentially structured tasks requires integrating distally acquired information. The extensive computational cost of such integration challenges planning methods that integrate online, at decision time. Furthermore, it remains unclear whether ‘offline’ integration during replay supports planning, and if so which memories should be replayed. Inspired by machine learning, we propose that (a) offline replay of trajectories facilitates integrating representations that guide decisions, and (b) unsigned prediction errors (uncertainty) trigger such integrative replay. We designed a 2-step revaluation task for fMRI, whereby participants needed to integrate changes in rewards with past knowledge to optimally replan decisions. As predicted, we found that (a) multi-voxel pattern evidence for off-task replay predicts subsequent replanning; (b) neural sensitivity to uncertainty predicts subsequent replay and replanning; (c) off-task hippocampus and anterior cingulate activity increase when revaluation is required. These findings elucidate how the brain leverages offline mechanisms in planning and goal-directed behavior under uncertainty.
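The abstract's two hypotheses — that offline replay of remembered trajectories propagates new information back to earlier decision points, and that unsigned prediction error (surprise) gates how much replay occurs — correspond closely to Dyna-style replay in machine learning. The toy simulation below is a hypothetical sketch of that idea, not the authors' task or analysis code: a deterministic 2-step task in which second-stage rewards are revalued, the agent experiences only the second stage afterwards, and offline replay of first-stage memories (triggered when accumulated unsigned prediction error is large) is what flips the first-stage choice. All state names, parameters, and thresholds here are invented for illustration.

```python
def td_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One Q-learning backup; returns the unsigned prediction error."""
    bootstrap = max(Q[s_next].values()) if s_next is not None else 0.0
    delta = r + gamma * bootstrap - Q[s][a]
    Q[s][a] += alpha * delta
    return abs(delta)

# Hypothetical 2-step task: start state 'S'; action 0 leads to 'A',
# action 1 leads to 'B'; each second-stage state has one action ending
# in a terminal reward. Initially 'A' pays off and 'B' does not.
model = {('S', 0): ('A', 0.0), ('S', 1): ('B', 0.0),
         ('A', 0): (None, 1.0), ('B', 0): (None, 0.0)}
Q = {'S': {0: 0.0, 1: 0.0}, 'A': {0: 0.0}, 'B': {0: 0.0}}

# Phase 1: online experience of the full task; the agent learns to
# prefer action 0 at 'S' (it leads to the rewarded state 'A').
for _ in range(50):
    for (s, a), (s_next, r) in model.items():
        td_update(Q, s, a, r, s_next)
assert max(Q['S'], key=Q['S'].get) == 0

# Phase 2 (revaluation): second-stage rewards swap, and the agent only
# re-experiences the second stage, never the first-stage choice.
model[('A', 0)] = (None, 0.0)
model[('B', 0)] = (None, 1.0)
surprise = 0.0
for _ in range(20):
    for (s, a) in [('A', 0), ('B', 0)]:
        s_next, r = model[(s, a)]
        surprise += td_update(Q, s, a, r, s_next)

# At this point first-stage values are stale: without replay, the agent
# would still pick action 0 at 'S'. Offline replay of remembered
# first-stage transitions, gated by accumulated unsigned prediction
# error (the paper's hypothesis b), integrates the new rewards.
if surprise > 0.1:
    for _ in range(20):
        for (s, a) in [('S', 0), ('S', 1)]:
            s_next, r = model[(s, a)]
            td_update(Q, s, a, r, s_next)

# After replay the first-stage preference has flipped to action 1.
print(max(Q['S'], key=Q['S'].get))
```

Running the sketch prints `1`: replay alone, with no new first-stage experience, is sufficient for replanning — the computational analogue of the finding that off-task replay evidence predicts subsequent revaluation behavior.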
Publication Date: 14-Dec-2018
Citation: Momennejad, Ida, Otto, A. Ross, Daw, Nathaniel D., Norman, Kenneth A. (2018). Offline replay supports planning in human reinforcement learning. eLife, 7 (10.7554/eLife.32548)
DOI: 10.7554/eLife.32548
ISSN: 2050-084X
EISSN: 2050-084X
Language: eng
Type of Material: Journal Article
Journal/Proceeding Title: eLife
Version: Final published version. Article is made available in OAR by the publisher's permission or policy.



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.