DeepVoxels: Learning Persistent 3D Feature Embeddings

Author(s): Sitzmann, Vincent; Thies, Justus; Heide, Felix; Nießner, Matthias; Wetzstein, Gordon; Zollhöfer, Michael

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr13j9j
Full metadata record
DC Field | Value | Language
dc.contributor.author | Sitzmann, Vincent | -
dc.contributor.author | Thies, Justus | -
dc.contributor.author | Heide, Felix | -
dc.contributor.author | Nießner, Matthias | -
dc.contributor.author | Wetzstein, Gordon | -
dc.contributor.author | Zollhöfer, Michael | -
dc.date.accessioned | 2021-10-08T19:46:43Z | -
dc.date.available | 2021-10-08T19:46:43Z | -
dc.date.issued | 2019 | en_US
dc.identifier.citation | Sitzmann, Vincent, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, and Michael Zollhöfer. "DeepVoxels: Learning Persistent 3D Feature Embeddings." In IEEE/CVF Conference on Computer Vision and Pattern Recognition (2019): pp. 2432-2441. doi:10.1109/CVPR.2019.00254. | en_US
dc.identifier.issn | 1063-6919 | -
dc.identifier.uri | https://openaccess.thecvf.com/content_CVPR_2019/papers/Sitzmann_DeepVoxels_Learning_Persistent_3D_Feature_Embeddings_CVPR_2019_paper.pdf | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr13j9j | -
dc.description.abstract | In this work, we address the lack of 3D understanding of generative neural networks by introducing a persistent 3D feature embedding for view synthesis. To this end, we propose DeepVoxels, a learned representation that encodes the view-dependent appearance of a 3D scene without having to explicitly model its geometry. At its core, our approach is based on a Cartesian 3D grid of persistent embedded features that learn to make use of the underlying 3D scene structure. Our approach combines insights from 3D geometric computer vision with recent advances in learning image-to-image mappings based on adversarial loss functions. DeepVoxels is supervised, without requiring a 3D reconstruction of the scene, using a 2D re-rendering loss and enforces perspective and multi-view geometry in a principled manner. We apply our persistent 3D scene representation to the problem of novel view synthesis, demonstrating high-quality results for a variety of challenging scenes. | en_US
dc.format.extent | 2432 - 2441 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | en_US
dc.rights | Author's manuscript | en_US
dc.title | DeepVoxels: Learning Persistent 3D Feature Embeddings | en_US
dc.type | Conference Article | en_US
dc.identifier.doi | 10.1109/CVPR.2019.00254 | -
dc.identifier.eissn | 2575-7075 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US
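The abstract describes a persistent Cartesian grid of learned features that is projected into each camera view for supervision with a 2D re-rendering loss. The sketch below is a hypothetical, heavily simplified illustration of that projection step (not the authors' code): voxel centers are mapped through a pinhole camera into a 2D feature map, with a nearest-voxel-wins rule standing in for DeepVoxels' learned occlusion reasoning. All names, camera parameters, and the toy feature volume are invented for illustration.

```python
import math

def project_voxels(features, grid_size, voxel_size, cam_z, focal, img_size):
    """Splat a sparse voxel feature volume into a 2D feature map.

    features: dict mapping voxel index (i, j, k) -> scalar feature.
    The camera sits on the z-axis looking down +z; nearest voxel per
    pixel wins (a crude stand-in for differentiable occlusion handling).
    """
    H = W = img_size
    fmap = [[0.0] * W for _ in range(H)]
    depth = [[math.inf] * W for _ in range(H)]
    half = (grid_size - 1) / 2.0
    for (i, j, k), f in features.items():
        # voxel center in camera coordinates (grid centered at origin)
        x = (i - half) * voxel_size
        y = (j - half) * voxel_size
        z = (k - half) * voxel_size + cam_z
        if z <= 0:
            continue  # behind the camera
        # pinhole perspective projection to pixel coordinates
        u = int(round(focal * x / z + W / 2))
        v = int(round(focal * y / z + H / 2))
        if 0 <= u < W and 0 <= v < H and z < depth[v][u]:
            depth[v][u] = z  # keep the nearest voxel along this ray
            fmap[v][u] = f
    return fmap

# Toy persistent feature volume: two voxels on the same camera ray at
# different depths; only the nearer one should survive projection.
feats = {(1, 2, 0): 0.7, (1, 2, 3): 0.2}
fmap = project_voxels(feats, grid_size=4, voxel_size=1.0,
                      cam_z=5.0, focal=8.0, img_size=8)
```

In the full method this projection is differentiable (trilinear sampling rather than rounding), so gradients from the 2D re-rendering loss flow back into the persistent 3D feature grid.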

Files in This Item:
File | Description | Size | Format
Persistent3DFeatureEmbeddings.pdf | | 1.74 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.