Im2Pano3D: Extrapolating 360° Structure and Semantics Beyond the Field of View

Author(s): Song, Shuran; Zeng, Andy; Chang, Angel X.; Savva, Manolis; Savarese, Silvio; Funkhouser, Thomas

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr13r70
Abstract: We present Im2Pano3D, a convolutional neural network that generates a dense prediction of 3D structure and a probability distribution of semantic labels for a full 360° panoramic view of an indoor scene when given only a partial observation (≤ 50%) in the form of an RGB-D image. To make this possible, Im2Pano3D leverages strong contextual priors learned from large-scale synthetic and real-world indoor scenes. To ease the prediction of 3D structure, we propose to parameterize 3D surfaces with their plane equations and train the model to predict these parameters directly. To provide meaningful training supervision, we use multiple loss functions that consider both pixel level accuracy and global context consistency. Experiments demonstrate that Im2Pano3D is able to predict the semantics and 3D structure of the unobserved scene with more than 56% pixel accuracy and less than 0.52m average distance error, which is significantly better than alternative approaches.
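The plane-equation parameterization mentioned in the abstract can be illustrated with a minimal sketch (the function name, array layout, and clamping threshold below are assumptions for illustration, not taken from the paper). If a pixel's surface lies on the plane n·x = d and its viewing ray is r, then any point on the ray is x = t·r, so n·(t·r) = d gives the depth t = d / (n·r):

```python
import numpy as np

def depth_from_plane_params(normals, offsets, rays):
    """Recover per-pixel depth from plane parameters (illustrative sketch).

    normals: (H, W, 3) unit surface normals n predicted per pixel
    offsets: (H, W)    plane offsets d, so each surface satisfies n.x = d
    rays:    (H, W, 3) unit viewing-ray directions r per pixel

    A point along the ray is x = t * r; substituting into n.x = d
    yields t = d / (n.r).
    """
    # Per-pixel dot product n.r over the channel axis
    denom = np.einsum('hwc,hwc->hw', normals, rays)
    # Clamp near-zero denominators (rays nearly parallel to the plane)
    denom = np.where(np.abs(denom) < 1e-6, 1e-6, denom)
    return offsets / denom
```

Predicting (n, d) instead of raw depth lets the network output is locally constant on planar regions such as walls and floors, which is the motivation the abstract gives for this parameterization.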
Publication Date: 2018
Citation: Song, Shuran, Andy Zeng, Angel X. Chang, Manolis Savva, Silvio Savarese, and Thomas Funkhouser. "Im2Pano3D: Extrapolating 360° Structure and Semantics Beyond the Field of View." In IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018): pp. 3847-3856. doi:10.1109/CVPR.2018.00405
DOI: 10.1109/CVPR.2018.00405
ISSN: 1063-6919
EISSN: 2575-7075
Pages: 3847 - 3856
Type of Material: Conference Article
Journal/Proceeding Title: IEEE/CVF Conference on Computer Vision and Pattern Recognition
Version: Author's manuscript



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.