Provable Representation Learning for Imitation Learning via Bi-level Optimization

Author(s): Arora, Sanjeev; Du, Simon; Kakade, Sham; Luo, Yuping; Saunshi, Nikunj

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1xg1r
Abstract: A common strategy in modern learning systems is to learn a representation that is useful for many tasks, a.k.a. representation learning. We study this strategy in the imitation learning setting for Markov decision processes (MDPs) where multiple experts’ trajectories are available. We formulate representation learning as a bi-level optimization problem where the “outer” optimization tries to learn the joint representation and the “inner” optimization encodes the imitation learning setup and tries to learn task-specific parameters. We instantiate this framework for the imitation learning settings of behavior cloning and observation-alone. Theoretically, we show using our framework that representation learning can provide sample complexity benefits for imitation learning in both settings. We also provide proof-of-concept experiments to verify our theory.
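
For a rough sense of the formulation (the notation below is illustrative, not taken verbatim from the paper): with a shared representation φ, task-specific parameters θ_t, and expert demonstration data D_t for task t, the bi-level problem can be sketched as

\min_{\phi} \; \frac{1}{T} \sum_{t=1}^{T} \mathcal{L}_t\big(\theta_t(\phi), \phi\big) \quad \text{subject to} \quad \theta_t(\phi) \in \arg\min_{\theta} \; \ell\big(\theta; \phi, \mathcal{D}_t\big),

where the outer problem learns the joint representation φ and each inner problem learns the task-specific parameters. In the behavior cloning instance, ℓ could be taken as the negative log-likelihood of the expert’s actions under a policy π_θ(· | φ(s)).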
Publication Date: 2020
Citation: Arora, Sanjeev, Simon Du, Sham Kakade, Yuping Luo, and Nikunj Saunshi. "Provable Representation Learning for Imitation Learning via Bi-level Optimization." In International Conference on Machine Learning (2020): pp. 367-376.
ISSN: 2640-3498
Pages: 367 - 376
Type of Material: Conference Article
Journal/Proceeding Title: International Conference on Machine Learning
Version: Final published version. Article is made available in OAR by the publisher's permission or policy.



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.