
Provably Efficient Maximum Entropy Exploration

Author(s): Hazan, Elad; Kakade, Sham; Singh, Karan; van Soest, Abby

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr10v73
Abstract: Suppose an agent is in a (possibly unknown) Markov Decision Process in the absence of a reward signal: what might we hope that the agent can efficiently learn to do? This work studies a broad class of objectives that are defined solely as functions of the state-visitation frequencies induced by how the agent behaves. For example, one natural, intrinsically defined objective is for the agent to learn a policy that induces a distribution over the state space that is as uniform as possible, as measured in an entropic sense. We provide an efficient algorithm to optimize such intrinsically defined objectives, when given access to a black-box planning oracle (which is robust to function approximation). Furthermore, when restricted to the tabular setting, where we have sample-based access to the MDP, our proposed algorithm is provably efficient in terms of both its sample and computational complexities. Key to our algorithmic methodology is the conditional gradient method (a.k.a. the Frank-Wolfe algorithm), which makes calls to an approximate MDP solver.
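
For intuition, below is a minimal sketch of the kind of Frank-Wolfe loop the abstract describes, written for a small tabular MDP with known dynamics. The transition array P, the value-iteration planner, the step-size schedule, and all parameter names are illustrative assumptions rather than the authors' implementation; the paper itself works with a sample-based planning oracle and a smoothed entropy objective.

```python
# Sketch only: exact-knowledge tabular stand-in for the Frank-Wolfe (conditional
# gradient) scheme described in the abstract. P has shape (states, actions, states).
import numpy as np

def plan(P, reward, horizon=50, gamma=0.95):
    """Hypothetical planning oracle: finite-horizon value iteration for a state-based reward."""
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    for _ in range(horizon):
        Q = reward[:, None] + gamma * (P @ V)   # Q[s, a] under the current value estimate
        V = Q.max(axis=1)
    return Q.argmax(axis=1)                      # greedy deterministic policy over states

def state_distribution(P, policy, horizon=50, gamma=0.95):
    """Discounted state-visitation distribution of a deterministic policy (uniform start)."""
    n_states = P.shape[0]
    cur = np.ones(n_states) / n_states
    total = np.zeros(n_states)
    P_pi = P[np.arange(n_states), policy]        # P_pi[s, s'] = P[s, policy[s], s']
    for t in range(horizon):
        total += (gamma ** t) * cur
        cur = cur @ P_pi
    return total * (1 - gamma)                   # normalize to (approximately) a distribution

def max_ent_explore(P, iters=100, eps=1e-6):
    """Frank-Wolfe loop: plan against the entropy gradient, then mix in the new policy."""
    n_states = P.shape[0]
    policies, weights = [], []
    d_mix = np.ones(n_states) / n_states         # visitation distribution of the policy mixture
    for t in range(iters):
        reward = -np.log(d_mix + eps) - 1.0      # gradient of the entropy H(d) at d_mix
        pi_t = plan(P, reward)                   # linear-maximization step via the planner
        d_t = state_distribution(P, pi_t)
        step = 2.0 / (t + 2)                     # standard Frank-Wolfe step size
        d_mix = (1 - step) * d_mix + step * d_t
        weights = [w * (1 - step) for w in weights] + [step]
        policies.append(pi_t)
    return policies, weights, d_mix              # mixture of policies and its state distribution
```

On a small synthetic MDP (for instance, P sampled as a row-normalized random tensor), one would expect d_mix returned by this sketch to spread out toward a near-uniform distribution as the iterations proceed, which is the behavior the entropic objective is meant to induce.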
Publication Date: 2019
Citation: Hazan, Elad, Sham Kakade, Karan Singh, and Abby Van Soest. "Provably Efficient Maximum Entropy Exploration." In Proceedings of the 36th International Conference on Machine Learning (2019): pp. 2681-2691.
ISSN: 2640-3498
Pages: 2681 - 2691
Type of Material: Conference Article
Journal/Proceeding Title: Proceedings of the 36th International Conference on Machine Learning
Version: Final published version. Article is made available in OAR by the publisher's permission or policy.



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.