
Take the scenic route: Improving generalization in vision-and-language navigation

Author(s): Yu, F; Deng, Z; Narasimhan, K; Russakovsky, O

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1vv7q
Abstract: © 2020 IEEE. In the Vision-and-Language Navigation (VLN) task, an agent with egocentric vision navigates to a destination given natural language instructions. Manually annotating these instructions is time-consuming and expensive, so many existing approaches automatically generate additional samples to improve agent performance. However, these approaches still have difficulty generalizing to new environments. In this work, we investigate the popular Room-to-Room (R2R) VLN benchmark and discover that what is important is not only the amount of data you synthesize, but also how you do it. We find that shortest-path sampling, which is used by both the R2R benchmark and existing augmentation methods, encodes biases in the action space of the agent, which we dub action priors. We then show that these action priors offer one explanation for the poor generalization of existing works. To mitigate such priors, we propose a path sampling method based on random walks to augment the data. By training with this augmentation strategy, our agent is able to generalize better to unknown environments compared to the baseline, significantly improving model performance in the process.
Publication Date: 1-Jun-2020
Citation: Yu, F., Deng, Z., Narasimhan, K., & Russakovsky, O. (2020). Take the scenic route: Improving generalization in vision-and-language navigation. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2020-June, 4000-4004. doi:10.1109/CVPRW50498.2020.00468
DOI: 10.1109/CVPRW50498.2020.00468
ISSN: 2160-7508
EISSN: 2160-7516
Pages: 4000 - 4004
Type of Material: Conference Article
Journal/Proceeding Title: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Version: Author's manuscript
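
The abstract contrasts shortest-path sampling with path sampling based on random walks for data augmentation. Below is a minimal sketch of the random-walk idea on a navigation graph; the dictionary-based graph, the function name sample_random_walk_path, and the no-revisit rule are illustrative assumptions, not the authors' exact procedure or the R2R data format.

```python
import random

def sample_random_walk_path(graph, start_node, max_steps=10, seed=None):
    """Sample a path by taking a random walk over a navigation graph.

    `graph` is assumed to map each node (e.g., a viewpoint id) to a list of
    adjacent nodes. This is a hypothetical representation for illustration,
    not the benchmark's data format.
    """
    rng = random.Random(seed)
    path = [start_node]
    current = start_node
    for _ in range(max_steps):
        # Avoid revisiting nodes so the walk keeps making progress.
        neighbors = [n for n in graph[current] if n not in path]
        if not neighbors:
            break
        current = rng.choice(neighbors)
        path.append(current)
    return path


# Example: a toy navigation graph with five viewpoints.
toy_graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(sample_random_walk_path(toy_graph, "A", max_steps=4, seed=0))
```

Because each step is drawn uniformly from the current node's neighbors rather than following a shortest route to a fixed goal, the resulting paths yield a more varied distribution over actions, which is the kind of bias (the action priors described in the abstract) that shortest-path sampling would otherwise encode.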


