|Abstract:||© 2020 IEEE. The goal of this project is to learn a 3D shape representation that enables accurate surface reconstruction, compact storage, efficient computation, consistency for similar shapes, generalization across diverse shape categories, and inference from depth camera observations. Towards this end, we introduce Local Deep Implicit Functions (LDIF), a 3D shape representation that decomposes space into a structured set of learned implicit functions. We provide networks that infer the space decomposition and local deep implicit functions from a 3D mesh or posed depth image. In experiments, we find that it provides 10.3 points higher surface reconstruction accuracy (F-Score) than the state of the art (OccNet), while requiring fewer than 1% of the network parameters. Experiments on posed depth image completion and generalization to unseen classes show 15.8 and 17.8 point improvements over the state of the art, while producing a structured 3D representation for each input with consistency across diverse shape collections.|
|Citation:||Genova, K., Cole, F., Sud, A., Sarna, A., & Funkhouser, T. (2020). Local Deep Implicit Functions for 3D Shape. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 4856 - 4865. doi:10.1109/CVPR42600.2020.00491|
|Pages:||4856 - 4865|
|Type of Material:||Conference Article|
|Journal/Proceeding Title:||Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition|
|Version:||Final published version. This is an open access article.|
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.
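The abstract above describes evaluating a shape as a structured set of locally supported implicit functions. A minimal sketch of that core idea, assuming simple scaled Gaussian elements summed into one global implicit function (the paper additionally modulates each element with a learned local decoder; all names and values below are illustrative, not from the authors' code):

```python
import math

def gaussian_element(x, center, radius, scale):
    # One local element: a scaled isotropic Gaussian centered in space.
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return scale * math.exp(-d2 / (2.0 * radius ** 2))

def ldif_value(x, elements):
    # LDIF-style decomposition: the global implicit function is the sum
    # of locally supported elements. Here each element is a plain
    # Gaussian; in the paper each is further shaped by a small network.
    return sum(gaussian_element(x, c, r, s) for c, r, s in elements)

# Two illustrative elements forming a dumbbell-like shape.
elements = [((-1.0, 0.0, 0.0), 0.5, 1.0),
            (( 1.0, 0.0, 0.0), 0.5, 1.0)]

ISO = 0.5  # iso-level: points where the value exceeds ISO are "inside"
print(ldif_value((-1.0, 0.0, 0.0), elements) > ISO)  # at a center: inside
print(ldif_value((0.0, 2.0, 0.0), elements) > ISO)   # far away: outside
```

The surface is then the iso-contour of this summed function, which a marching-cubes step can extract into a mesh.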