Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation

Author(s): Deng, Zhiwei; Narasimhan, Karthik; Russakovsky, Olga

Abstract: The ability to perform effective planning is crucial for building an instruction-following agent. When navigating through a new environment, an agent is challenged with (1) connecting the natural language instructions with its progressively growing knowledge of the world; and (2) performing long-range planning and decision making in the form of effective exploration and error correction. Current methods are still limited on both fronts despite extensive efforts. In this paper, we introduce Evolving Graphical Planner (EGP), a module that allows global planning for navigation based on raw sensory input. The module dynamically constructs a graphical representation, generalizes the local action space to allow for more flexible decision making, and performs efficient planning on a proxy representation. We demonstrate our model on a challenging Vision-and-Language Navigation (VLN) task with photorealistic images, and achieve superior performance compared to previous navigation architectures. Concretely, we achieve 53% success rate on the test split of Room-to-Room navigation task (Anderson et al.) through pure imitation learning, outperforming previous architectures by up to 5%.
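The abstract's core idea — a graph that grows as the agent observes new viewpoints, with the action space generalized from the current node's neighbors to the whole explored frontier — can be sketched in a few lines. The class below is an illustrative toy, not the authors' implementation: the node names, the `observe`/`plan` interface, and the external scoring function are all hypothetical stand-ins (in the paper, scores come from a learned policy conditioned on the instruction).

```python
# Illustrative sketch (NOT the authors' code): a navigation graph that
# evolves as the agent moves, with planning over the global frontier.

class EvolvingGraph:
    def __init__(self, start):
        self.edges = {start: set()}   # adjacency: viewpoint -> neighbors
        self.frontier = set()         # observed but not yet visited viewpoints
        self.current = start

    def observe(self, neighbors):
        """Grow the graph with viewpoints visible from the current node."""
        for n in neighbors:
            self.edges.setdefault(n, set())
            self.edges[self.current].add(n)
            self.edges[n].add(self.current)
            self.frontier.add(n)

    def plan(self, score):
        """Move to the highest-scoring frontier node (global action space),
        rather than restricting the choice to the current node's neighbors."""
        target = max(self.frontier, key=score)
        self.frontier.discard(target)
        self.current = target
        return target

# Hypothetical usage: scores would come from a learned, instruction-
# conditioned policy; fixed numbers are used here purely for illustration.
g = EvolvingGraph("v0")
g.observe(["v1", "v2"])
g.plan(lambda n: {"v1": 0.2, "v2": 0.9}[n])   # agent moves to "v2"
```

Note that after moving, unvisited nodes such as `"v1"` remain in the frontier, so a later planning step can still jump back to them — this is the error-correction behavior the abstract refers to.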
Publication Date: 2020
Citation: Deng, Zhiwei, Karthik Narasimhan, and Olga Russakovsky. "Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation." Advances in Neural Information Processing Systems 33 (2020): pp. 20660–20672.
ISSN: 1049-5258
Pages: 20660–20672
Type of Material: Conference Article
Journal/Proceeding Title: Advances in Neural Information Processing Systems
Version: Final published version. Article is made available in OAR by the publisher's permission or policy.

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.