
3D ShapeNets: A deep representation for volumetric shapes

Author(s): Wu, Zhirong; Song, Shuran; Khosla, Aditya; Yu, Fisher; Zhang, Linguang; Tang, Xiaoou; Xiao, Jianxiong

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1t267
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wu, Zhirong | -
dc.contributor.author | Song, Shuran | -
dc.contributor.author | Khosla, Aditya | -
dc.contributor.author | Yu, Fisher | -
dc.contributor.author | Zhang, Linguang | -
dc.contributor.author | Tang, Xiaoou | -
dc.contributor.author | Xiao, Jianxiong | -
dc.date.accessioned | 2021-10-08T19:48:33Z | -
dc.date.available | 2021-10-08T19:48:33Z | -
dc.date.issued | 2015 | en_US
dc.identifier.citation | Wu, Zhirong, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. "3D ShapeNets: A deep representation for volumetric shapes." In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015). doi: 10.1109/CVPR.2015.7298801 | en_US
dc.identifier.issn | 1063-6919 | -
dc.identifier.uri | https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Wu_3D_ShapeNets_A_2015_CVPR_paper.pdf | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1t267 | -
dc.description.abstract | 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet, a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks. | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | en_US
dc.rights | Author's manuscript | en_US
dc.title | 3D ShapeNets: A deep representation for volumetric shapes | en_US
dc.type | Conference Article | en_US
dc.identifier.doi | 10.1109/CVPR.2015.7298801 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US
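The abstract describes representing a geometric 3D shape as binary variables on a 3D voxel grid. As a minimal illustrative sketch of that input representation (not the authors' code; the function name and the unit-cube normalization scheme are assumptions, though the 30x30x30 resolution matches the paper), a point set can be quantized into a binary occupancy grid like this:

```python
import numpy as np

def voxelize(points, resolution=30):
    """Map an (N, 3) point cloud onto a binary occupancy grid.

    Illustrative only: normalizes the points into the unit cube,
    quantizes each point to a voxel index, and marks that voxel as
    occupied (1). 3D ShapeNets consumes grids of this kind.
    """
    points = np.asarray(points, dtype=float)
    # Normalize into [0, 1)^3, preserving the aspect ratio.
    mins = points.min(axis=0)
    span = (points.max(axis=0) - mins).max()
    normalized = (points - mins) / (span + 1e-9)
    # Quantize to integer voxel indices and clip to the grid bounds.
    idx = np.clip((normalized * resolution).astype(int), 0, resolution - 1)
    grid = np.zeros((resolution,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid
```

The model then treats each cell of such a grid as a binary random variable whose joint distribution is learned by the Convolutional Deep Belief Network.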

Files in This Item:
File | Description | Size | Format
3DShapeNetsDeepRepresentationVolumetricShapes.pdf | | 2.45 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.