Physically-Based Rendering for Indoor Scene Understanding Using Convolutional Neural Networks

Author(s): Zhang, Yinda; Song, Shuran; Yumer, Ersin; Savva, Manolis; Lee, Joon-Young; et al.

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1pk02
Full metadata record
dc.contributor.author: Zhang, Yinda
dc.contributor.author: Song, Shuran
dc.contributor.author: Yumer, Ersin
dc.contributor.author: Savva, Manolis
dc.contributor.author: Lee, Joon-Young
dc.contributor.author: Jin, Hailin
dc.contributor.author: Funkhouser, Thomas
dc.date.accessioned: 2021-10-08T19:49:46Z
dc.date.available: 2021-10-08T19:49:46Z
dc.date.issued: 2017
dc.identifier.citation: Zhang, Yinda, Shuran Song, Ersin Yumer, Manolis Savva, Joon-Young Lee, Hailin Jin, and Thomas Funkhouser. "Physically-Based Rendering for Indoor Scene Understanding Using Convolutional Neural Networks." In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017): pp. 5057-5065. doi:10.1109/CVPR.2017.537
dc.identifier.issn: 1063-6919
dc.identifier.uri: https://openaccess.thecvf.com/content_cvpr_2017/papers/Zhang_Physically-Based_Rendering_for_CVPR_2017_paper.pdf
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr1pk02
dc.description.abstract: Indoor scene understanding is central to applications such as robot navigation and human companion assistance. Over the last years, data-driven deep neural networks have outperformed many traditional approaches thanks to their representation learning capabilities. One of the bottlenecks in training for better representations is the amount of available per-pixel ground truth data that is required for core scene understanding tasks such as semantic segmentation, normal prediction, and object boundary detection. To address this problem, a number of works proposed using synthetic data. However, a systematic study of how such synthetic data is generated is missing. In this work, we introduce a large-scale synthetic dataset with 500K physically-based rendered images from 45K realistic 3D indoor scenes. We study the effects of rendering methods and scene lighting on training for three computer vision tasks: surface normal prediction, semantic segmentation, and object boundary detection. This study provides insights into the best practices for training with synthetic data (more realistic rendering is worth it) and shows that pretraining with our new synthetic dataset can improve results beyond the current state of the art on all three tasks.
dc.format.extent: 5057 - 5065
dc.language.iso: en_US
dc.relation.ispartof: IEEE Conference on Computer Vision and Pattern Recognition
dc.rights: Author's manuscript
dc.title: Physically-Based Rendering for Indoor Scene Understanding Using Convolutional Neural Networks
dc.type: Conference Article
dc.identifier.doi: 10.1109/CVPR.2017.537
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding

Files in This Item:
RenderIndoorSceneUnderstandCNN.pdf (4.92 MB, Adobe PDF)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.