Shape Anchors for Data-Driven Multi-view Reconstruction
Author(s): Owens, Andrew; Xiao, Jianxiong; Torralba, Antonio; Freeman, William
To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1pc16
Abstract: We present a data-driven method for building dense 3D reconstructions using a combination of recognition and multi-view cues. Our approach is based on the idea that some image patches are so distinctive that we can accurately estimate their latent 3D shapes solely using recognition. We call these patches shape anchors, and we use them as the basis of a multi-view reconstruction system that transfers dense, complex geometry between scenes. We "anchor" our 3D interpretation on these patches, using them to predict geometry for parts of the scene that are relatively ambiguous. The resulting algorithm produces dense reconstructions from stereo point clouds that are sparse and noisy, and we demonstrate it on a challenging dataset of real-world, indoor scenes.
Publication Date: 2013
Citation: Owens, Andrew, Jianxiong Xiao, Antonio Torralba, and William Freeman. "Shape Anchors for Data-Driven Multi-view Reconstruction." In IEEE International Conference on Computer Vision (2013), pp. 33-40. doi:10.1109/ICCV.2013.461
DOI: 10.1109/ICCV.2013.461
Pages: 33-40
Type of Material: Conference Article
Journal/Proceeding Title: IEEE International Conference on Computer Vision
Version: Author's manuscript
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.
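The abstract above describes the shape-anchor idea only at a high level. The following is a minimal illustrative sketch of that idea, not the authors' implementation: it assumes a database of (descriptor, depth-patch) exemplars, and all names (find_shape_anchor, agreement_thresh, k) are hypothetical. A query patch is accepted as a shape anchor only when its retrieved nearest neighbours agree on the underlying geometry, mirroring the abstract's claim that some patches are distinctive enough for recognition alone to determine their 3D shape.

```python
# Sketch of the shape-anchor idea (assumed, simplified): match an image patch
# against a database of (descriptor, depth) exemplars and transfer geometry
# only when the retrieved shapes agree with each other.

import numpy as np

def find_shape_anchor(patch_desc, db_descs, db_depths, k=5, agreement_thresh=0.05):
    """Return a transferred depth patch if the query patch looks distinctive
    enough to act as a shape anchor, otherwise None.

    patch_desc : (D,)      descriptor of the query image patch
    db_descs   : (N, D)    descriptors of database patches
    db_depths  : (N, H, W) depth patches associated with the database entries
    """
    # k-nearest neighbours in descriptor space (brute force, for clarity).
    dists = np.linalg.norm(db_descs - patch_desc, axis=1)
    nn = np.argsort(dists)[:k]

    # Measure how much the retrieved 3D shapes disagree with one another.
    depths = db_depths[nn]                 # (k, H, W)
    spread = depths.std(axis=0).mean()     # mean per-pixel disagreement

    if spread < agreement_thresh:
        # Distinctive patch: recognition alone pins down the shape,
        # so transfer the (averaged) retrieved geometry.
        return depths.mean(axis=0)
    return None                            # ambiguous; defer to multi-view cues

# Toy usage with random data (real descriptors/depths would come from training scenes).
rng = np.random.default_rng(0)
db_descs = rng.normal(size=(100, 64))
db_depths = rng.normal(size=(100, 8, 8))
anchor = find_shape_anchor(db_descs[0] + 0.01, db_descs, db_depths)
print("anchor found" if anchor is not None else "ambiguous patch")
```

In the full system described by the paper, such anchor predictions would then be reconciled with multi-view (stereo) evidence and propagated to more ambiguous parts of the scene; the sketch covers only the recognition-based anchor test.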