
Semantic Scene Completion from a Single Depth Image

Author(s): Song, Shuran; Yu, Fisher; Zeng, Andy; Chang, Angel X; Savva, Manolis; Funkhouser, Thomas

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr16c36
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Song, Shuran
dc.contributor.author: Yu, Fisher
dc.contributor.author: Zeng, Andy
dc.contributor.author: Chang, Angel X
dc.contributor.author: Savva, Manolis
dc.contributor.author: Funkhouser, Thomas
dc.date.accessioned: 2021-10-08T19:49:59Z
dc.date.available: 2021-10-08T19:49:59Z
dc.date.issued: 2017 [en_US]
dc.identifier.citation: Song, Shuran, Fisher Yu, Andy Zeng, Angel X. Chang, Manolis Savva, and Thomas Funkhouser. "Semantic Scene Completion from a Single Depth Image." In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017): pp. 190-198. doi:10.1109/CVPR.2017.28 [en_US]
dc.identifier.issn: 1063-6919
dc.identifier.uri: https://arxiv.org/pdf/1611.08974v1.pdf
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr16c36
dc.description.abstract: This paper focuses on semantic scene completion, a task for producing a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. Previous work has considered scene completion and semantic labeling of depth maps separately. However, we observe that these two problems are tightly intertwined. To leverage the coupled nature of these two tasks, we introduce the semantic scene completion network (SSCNet), an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum. Our network uses a dilation-based 3D context module to efficiently expand the receptive field and enable 3D context learning. To train our network, we construct SUNCG, a manually created large-scale dataset of synthetic 3D scenes with dense volumetric annotations. Our experiments demonstrate that the joint model outperforms methods addressing each task in isolation and outperforms alternative approaches on the semantic scene completion task. The dataset and code are available at http://sscnet.cs.princeton.edu. [en_US]
dc.format.extent: 190 - 198 [en_US]
dc.language.iso: en_US [en_US]
dc.relation.ispartof: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) [en_US]
dc.rights: Author's manuscript [en_US]
dc.title: Semantic Scene Completion from a Single Depth Image [en_US]
dc.type: Conference Article [en_US]
dc.identifier.doi: 10.1109/CVPR.2017.28
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding [en_US]
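
The abstract above describes the paper's core design: a single end-to-end 3D convolutional network that maps a voxelized single-view depth observation to per-voxel occupancy and semantic labels, using dilated 3D convolutions to enlarge the receptive field without losing resolution. The sketch below is a minimal, hypothetical PyTorch rendering of that dilation idea only; the layer counts, channel widths, and 12-way label set (11 categories plus empty space) are illustrative assumptions, not the released architecture, which is available at http://sscnet.cs.princeton.edu.

# Minimal sketch (not the authors' released code) of a dilation-based 3D
# context module in the spirit of SSCNet. All layer sizes, channel widths,
# and class counts below are illustrative assumptions.
import torch
import torch.nn as nn

class Dilated3DContextBlock(nn.Module):
    """Two dilated 3D convolutions with a residual connection.

    Dilation grows the receptive field without adding parameters or
    shrinking the feature volume, which is how a 3D context module can
    gather scene-scale context cheaply.
    """
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3,
                               padding=dilation, dilation=dilation)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3,
                               padding=dilation, dilation=dilation)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # residual sum preserves the input signal

class TinySSC(nn.Module):
    """Toy end-to-end model: voxelized depth in, per-voxel class scores out."""
    def __init__(self, num_classes: int = 12, channels: int = 16):
        super().__init__()
        self.stem = nn.Sequential(  # downsample the input volume once
            nn.Conv3d(1, channels, kernel_size=7, stride=2, padding=3),
            nn.ReLU(inplace=True),
        )
        self.context = nn.Sequential(  # growing dilation = growing receptive field
            Dilated3DContextBlock(channels, dilation=1),
            Dilated3DContextBlock(channels, dilation=2),
            Dilated3DContextBlock(channels, dilation=4),
        )
        self.head = nn.Conv3d(channels, num_classes, kernel_size=1)

    def forward(self, tsdf_volume):
        # tsdf_volume: (batch, 1, D, H, W) volumetric encoding of one depth image
        x = self.stem(tsdf_volume)
        x = self.context(x)
        return self.head(x)  # (batch, num_classes, D/2, H/2, W/2) voxel scores

if __name__ == "__main__":
    model = TinySSC()
    scores = model(torch.randn(1, 1, 64, 32, 64))  # small toy volume
    print(scores.shape)  # torch.Size([1, 12, 32, 16, 32])

Training such a model end to end on occupancy plus semantic labels, as the abstract argues, lets completion and labeling share evidence; the sketch omits the loss, data layer, and the paper's specific multi-scale feature aggregation.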

Files in This Item:
SemanticSceneCompletionSingleDepthImage.pdf (Adobe PDF, 31.23 MB)

