
On the theory of policy gradient methods: Optimality, approximation, and distribution shift

Author(s): Agarwal, A; Kakade, SM; Lee, JD; Mahajan, G

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1sb3wz6z
Full metadata record
DC Field | Value | Language
dc.contributor.author | Agarwal, A | -
dc.contributor.author | Kakade, SM | -
dc.contributor.author | Lee, JD | -
dc.contributor.author | Mahajan, G | -
dc.date.accessioned | 2024-01-21T19:38:05Z | -
dc.date.available | 2024-01-21T19:38:05Z | -
dc.date.issued | 2021-02 | en_US
dc.identifier.citation | Agarwal, A., Kakade, S. M., Lee, J. D., & Mahajan, G. (2021). On the theory of policy gradient methods: Optimality, approximation, and distribution shift. Journal of Machine Learning Research, 22. | en_US
dc.identifier.issn | 1532-4435 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1sb3wz6z | -
dc.description.abstract | Policy gradient methods are among the most effective methods for challenging reinforcement learning problems with large state and/or action spaces. However, little is known about even their most basic theoretical convergence properties, including whether and how fast they converge to a globally optimal solution, and how they cope with the approximation error introduced by using a restricted class of parametric policies. This work provides provable characterizations of the computational, approximation, and sample size properties of policy gradient methods in the context of discounted Markov Decision Processes (MDPs). We focus on both "tabular" policy parameterizations, where the optimal policy is contained in the class and where we show global convergence to the optimal policy, and parametric policy classes (considering both log-linear and neural policy classes), which may not contain the optimal policy and where we provide agnostic learning results. One central contribution of this work is in providing approximation guarantees that are average case, avoiding explicit worst-case dependencies on the size of the state space, by making a formal connection to supervised learning under distribution shift. This characterization shows an important interplay between estimation error, approximation error, and exploration (as characterized through a precisely defined condition number). | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Journal of Machine Learning Research | en_US
dc.rights | Final published version. This is an open access article. | en_US
dc.title | On the theory of policy gradient methods: Optimality, approximation, and distribution shift | en_US
dc.type | Journal Article | en_US
dc.identifier.eissn | 1533-7928 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US
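
As a loose illustration of the tabular setting described in the abstract, the following is a minimal sketch (not taken from the paper or its code) of exact policy gradient ascent with a softmax tabular policy parameterization on a small, randomly generated discounted MDP. The MDP, step size, and iteration count are illustrative assumptions chosen only to make the sketch runnable.

# Minimal sketch: exact policy gradient with a tabular softmax policy on a
# small random discounted MDP. All problem data below are made-up examples.
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 4, 3, 0.9                      # states, actions, discount factor
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] is a distribution over next states
R = rng.uniform(size=(S, A))                 # reward r(s, a) in [0, 1]
rho = np.ones(S) / S                         # start-state distribution

def softmax_policy(theta):
    # pi[s, a] from tabular logits theta of shape (S, A)
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def value_functions(pi):
    # Exact V^pi and Q^pi by solving the Bellman linear system
    P_pi = np.einsum('sa,san->sn', pi, P)    # state-to-state transitions under pi
    r_pi = (pi * R).sum(axis=1)
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    Q = R + gamma * P @ V
    return V, Q

def discounted_state_dist(pi):
    # Normalized discounted state visitation distribution d^pi under rho
    P_pi = np.einsum('sa,san->sn', pi, P)
    return (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, rho)

theta = np.zeros((S, A))
for _ in range(2000):
    pi = softmax_policy(theta)
    V, Q = value_functions(pi)
    adv = Q - V[:, None]                     # advantage A^pi(s, a)
    d = discounted_state_dist(pi)
    # Softmax policy gradient: d(s) * pi(a|s) * A^pi(s, a) / (1 - gamma)
    grad = d[:, None] * pi * adv / (1 - gamma)
    theta += 1.0 * grad                      # plain gradient ascent step

V, _ = value_functions(softmax_policy(theta))
print("J(pi_theta) =", rho @ V)

With exact gradients and a tabular softmax parameterization, iterates of this kind are the object of the paper's global-convergence analysis; the parametric settings discussed in the abstract (log-linear and neural policy classes) instead work with restricted policy classes and sample-based estimates.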

Files in This Item:
File | Description | Size | Format
19-736.pdf | - | 638.89 kB | Adobe PDF (View/Download)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.