Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation

Author(s): Deng, Zhiwei; Narasimhan, Karthik; Russakovsky, Olga

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1jw03
Full metadata record

dc.contributor.author: Deng, Zhiwei
dc.contributor.author: Narasimhan, Karthik
dc.contributor.author: Russakovsky, Olga
dc.date.accessioned: 2021-10-08T19:51:14Z
dc.date.available: 2021-10-08T19:51:14Z
dc.date.issued: 2020
dc.identifier.citation: Deng, Zhiwei, Karthik Narasimhan, and Olga Russakovsky. "Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation." Advances in Neural Information Processing Systems 33 (2020): pp. 20660–20672.
dc.identifier.issn: 1049-5258
dc.identifier.uri: https://proceedings.neurips.cc/paper/2020/hash/eddb904a6db773755d2857aacadb1cb0-Abstract.html
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr1jw03
dc.description.abstract: The ability to perform effective planning is crucial for building an instruction-following agent. When navigating through a new environment, an agent is challenged with (1) connecting the natural language instructions with its progressively growing knowledge of the world; and (2) performing long-range planning and decision making in the form of effective exploration and error correction. Current methods are still limited on both fronts despite extensive efforts. In this paper, we introduce Evolving Graphical Planner (EGP), a module that allows global planning for navigation based on raw sensory input. The module dynamically constructs a graphical representation, generalizes the local action space to allow for more flexible decision making, and performs efficient planning on a proxy representation. We demonstrate our model on a challenging Vision-and-Language Navigation (VLN) task with photorealistic images, and achieve superior performance compared to previous navigation architectures. Concretely, we achieve 53% success rate on the test split of Room-to-Room navigation task (Anderson et al.) through pure imitation learning, outperforming previous architectures by up to 5%.
dc.format.extent: 20660–20672
dc.language.iso: en_US
dc.relation.ispartof: Advances in Neural Information Processing Systems
dc.rights: Final published version. Article is made available in OAR by the publisher's permission or policy.
dc.title: Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation
dc.type: Conference Article
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding
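
The abstract above describes the core mechanism: the agent dynamically grows a graph over the viewpoints it has observed and plans globally over that graph, so it can explore and correct errors by backtracking rather than choosing only among locally adjacent moves. As a rough illustration of that idea, here is a minimal sketch in Python. It is not the authors' implementation: the NavGraph class, the score_frontier placeholder, and the random scoring are illustrative assumptions standing in for the learned policy and the proxy-graph planning described in the paper.

```python
import random
from collections import defaultdict

class NavGraph:
    """Dynamically constructed navigation graph (the abstract's
    'graphical representation'), grown as the agent explores."""

    def __init__(self, start):
        self.edges = defaultdict(set)  # adjacency over viewpoint ids
        self.visited = {start}
        self.frontier = set()          # observed but not yet visited

    def expand(self, node, neighbors):
        """Register newly observed neighbors of the current viewpoint."""
        for n in neighbors:
            self.edges[node].add(n)
            self.edges[n].add(node)
            if n not in self.visited:
                self.frontier.add(n)

    def visit(self, node):
        self.visited.add(node)
        self.frontier.discard(node)

def score_frontier(graph, instruction):
    """Stand-in for a learned policy that scores every frontier node
    against the instruction; random scores here, purely for illustration."""
    return {n: random.random() for n in graph.frontier}

def global_step(graph, instruction):
    """Pick the best node anywhere on the frontier (a global action space),
    which is what enables exploration and error correction by backtracking."""
    scores = score_frontier(graph, instruction)
    return max(scores, key=scores.get) if scores else None

# Toy rollout: start at viewpoint 'A', observe two neighbors, take one step.
g = NavGraph("A")
g.expand("A", ["B", "C"])
target = global_step(g, "walk past the kitchen and stop at the sofa")
g.visit(target)
print("moved to:", target, "| remaining frontier:", g.frontier)
```

The design point mirrored here is that the action space is the entire frontier of the evolving graph rather than just the neighbors of the current node; per the abstract, the paper makes this global choice tractable by performing the planning on a smaller proxy representation of the graph.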

Files in This Item:
File: GraphicalPlanner.pdf
Size: 5.84 MB
Format: Adobe PDF
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.