
When Is Partially Observable Reinforcement Learning Not Scary?

Author(s): Liu, Qinghua; Chung, Alan; Szepesvári, Csaba; Jin, Chi

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1cz32516
Full metadata record
DC Field | Value | Language
dc.contributor.author | Liu, Qinghua | -
dc.contributor.author | Chung, Alan | -
dc.contributor.author | Szepesvári, Csaba | -
dc.contributor.author | Jin, Chi | -
dc.date.accessioned | 2024-01-07T15:54:39Z | -
dc.date.available | 2024-01-07T15:54:39Z | -
dc.date.issued | 2022 | en_US
dc.identifier.citation | Liu, Qinghua, Chung, Alan, Szepesvári, Csaba, Jin, Chi. (2022). When Is Partially Observable Reinforcement Learning Not Scary? | en_US
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1cz32516 | -
dc.description.abstract | Applications of Reinforcement Learning (RL) in which agents must make a sequence of decisions without complete information about the latent states of the controlled system (that is, under partial observability of the states) are ubiquitous. Partially observable RL can be notoriously difficult: well-known information-theoretic results show that learning partially observable Markov decision processes (POMDPs) requires an exponential number of samples in the worst case. Yet, this does not rule out the existence of large subclasses of POMDPs over which learning is tractable. In this paper we identify such a subclass, which we call weakly revealing POMDPs. This family rules out the pathological instances of POMDPs where observations are uninformative to a degree that makes learning hard. We prove that for weakly revealing POMDPs, a simple algorithm combining optimism and Maximum Likelihood Estimation (MLE) is sufficient to guarantee polynomial sample complexity. To the best of our knowledge, this is the first provably sample-efficient result for learning from interactions in overcomplete POMDPs, where the number of latent states can be larger than the number of observations. | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Proceedings of Machine Learning Research | en_US
dc.rights | Final published version. This is an open access article. | en_US
dc.title | When Is Partially Observable Reinforcement Learning Not Scary? | en_US
dc.type | Conference Article | en_US
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US
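
To make the abstract's key condition concrete: informally, a POMDP is alpha-weakly revealing when its emission (observation) matrix is quantifiably far from losing rank, so that observations retain usable information about the latent state. The minimal sketch below, assuming a one-step version of the condition and a column-stochastic emission matrix over S latent states (my framing for illustration, not code or notation from the paper), checks whether the S-th singular value clears a threshold alpha using NumPy.

```python
import numpy as np

def sth_singular_value(emission: np.ndarray) -> float:
    """Return sigma_S(emission), the S-th largest singular value.

    emission[o, s] = probability of emitting observation o in latent
    state s, so each of the S columns sums to 1. A one-step version of
    the alpha-weakly-revealing condition asks for
    sigma_S(emission) >= alpha.
    """
    num_obs, num_states = emission.shape
    if num_obs < num_states:
        # Overcomplete case (more latent states than observations):
        # sigma_S is 0, so a single observation can never be revealing;
        # the paper addresses this with multi-step observation sequences.
        return 0.0
    svals = np.linalg.svd(emission, compute_uv=False)  # descending order
    return float(svals[num_states - 1])

# Informative emissions: observations nearly identify the latent state.
revealing = np.array([[0.9, 0.1],
                      [0.1, 0.9]])

# Pathological emissions: observations carry no state information.
uninformative = np.array([[0.5, 0.5],
                          [0.5, 0.5]])

alpha = 0.1  # hypothetical threshold, chosen only for illustration
print(sth_singular_value(revealing) >= alpha)      # True  (sigma_2 = 0.8)
print(sth_singular_value(uninformative) >= alpha)  # False (sigma_2 = 0.0)
```

The paper's guarantee pairs a condition of this kind with the optimism-plus-MLE recipe named in the abstract: roughly, maintain the set of candidate models whose log-likelihood on the data collected so far is near-maximal, then act according to the model in that set with the greatest optimal value.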

Files in This Item:
File | Description | Size | Format
liu22f.pdf | | 478.8 kB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.