Abstract: The successor representation was introduced into reinforcement learning by Dayan (1993) as a means of facilitating generalization between states with similar successors. Although reinforcement learning in general has been used extensively as a model of psychological and neural processes, the psychological validity of the successor representation has yet to be explored. An interesting possibility is that the successor representation can be used not only for reinforcement learning but for episodic learning as well. Our main contribution is to show that a variant of the temporal context model (TCM; Howard & Kahana, 2002), an influential model of episodic memory, can be understood as directly estimating the successor representation using the temporal difference learning algorithm (Sutton & Barto, 1998). This insight leads to a generalization of TCM and new experimental predictions. In addition to casting a new normative light on TCM, this equivalence suggests a previously unexplored point of contact between different learning systems.
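The abstract's central claim is that the successor representation (SR) can be estimated directly by temporal difference (TD) learning. A minimal sketch of that idea, on a toy deterministic environment of my own choosing (a four-state ring; all parameter names and values are illustrative assumptions, not taken from the paper):

```python
# Hedged sketch: TD(0) estimation of the successor representation M,
# where M[s][j] = expected discounted future occupancy of state j
# starting from state s, i.e. M = (I - gamma*T)^(-1) for transition
# matrix T. Each observed transition s -> s' triggers the update
#   M[s] <- M[s] + alpha * (onehot(s) + gamma * M[s'] - M[s])
# Environment, n_states, alpha, and gamma are illustrative choices.

n_states = 4
gamma = 0.9
alpha = 0.1

def next_state(s):
    # Deterministic ring: s transitions to (s + 1) mod n_states.
    return (s + 1) % n_states

# Initialize the SR estimate to the identity (each state predicts itself).
M = [[1.0 if i == j else 0.0 for j in range(n_states)]
     for i in range(n_states)]

s = 0
for _ in range(20000):
    s2 = next_state(s)
    for j in range(n_states):
        # TD target: immediate occupancy plus discounted successor row.
        target = (1.0 if j == s else 0.0) + gamma * M[s2][j]
        M[s][j] += alpha * (target - M[s][j])
    s = s2

# For this ring the exact SR from state 0 is M[0][j] = gamma**j / (1 - gamma**4),
# and the TD estimate converges to it.
```

The same update, with states replaced by distributed item representations, is what lets the paper recast TCM's context-updating equation as SR estimation.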
Citation: Gershman, S. J., Moore, C. D., Todd, M. T., Norman, K. A., & Sederberg, P. B. (2012). The Successor Representation and Temporal Context. Neural Computation, 24(6), 1553–1568. doi:10.1162/NECO_a_00282
Pages: 1553–1568
Type of Material: Journal Article
Journal/Proceeding Title: Neural Computation
Version: Final published version. The article is made available in OAR by the publisher's permission or policy.
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.