SymmetryNet: learning to predict reflectional and rotational symmetries of 3D shapes from single-view RGB-D images

Author(s): Shi, Yifei; Huang, Junwen; Zhang, Hongjia; Xu, Xin; Rusinkiewicz, Szymon; Xu, Kai

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1gn9g
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Shi, Yifei
dc.contributor.author: Huang, Junwen
dc.contributor.author: Zhang, Hongjia
dc.contributor.author: Xu, Xin
dc.contributor.author: Rusinkiewicz, Szymon
dc.contributor.author: Xu, Kai
dc.date.accessioned: 2021-10-08T19:50:10Z
dc.date.available: 2021-10-08T19:50:10Z
dc.date.issued: 2020-12 (en_US)
dc.identifier.citation: Shi, Yifei, Junwen Huang, Hongjia Zhang, Xin Xu, Szymon Rusinkiewicz, and Kai Xu. "SymmetryNet: learning to predict reflectional and rotational symmetries of 3D shapes from single-view RGB-D images." ACM Transactions on Graphics (TOG) 39, no. 6 (2020): pp. 1-14. doi:10.1145/3414685.3417775 (en_US)
dc.identifier.issn: 0730-0301
dc.identifier.uri: https://arxiv.org/pdf/2008.00485v1.pdf
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr1gn9g
dc.description.abstract: We study the problem of symmetry detection of 3D shapes from single-view RGB-D images, where severely missing data renders geometric detection approaches infeasible. We propose an end-to-end deep neural network that predicts both reflectional and rotational symmetries of 3D objects present in the input RGB-D image. Directly training a deep model for symmetry prediction, however, can quickly run into the issue of overfitting. We therefore adopt a multi-task learning approach: aside from symmetry axis prediction, our network is also trained to predict symmetry correspondences. In particular, given the 3D points present in the RGB-D image, our network outputs for each 3D point its symmetric counterpart corresponding to a specific predicted symmetry. In addition, our network can detect multiple symmetries of different types for a given shape. We also contribute a benchmark for 3D symmetry detection based on single-view RGB-D images. Extensive evaluation on the benchmark demonstrates the strong generalization ability of our method, in terms of high accuracy of both symmetry axis prediction and counterpart estimation. In particular, our method is robust in handling unseen object instances with large variations in shape and multi-symmetry composition, as well as novel object categories. (en_US)
dc.format.extent: 1 - 14 (en_US)
dc.language.iso: en_US (en_US)
dc.relation.ispartof: ACM Transactions on Graphics (en_US)
dc.rights: Author's manuscript (en_US)
dc.title: SymmetryNet: learning to predict reflectional and rotational symmetries of 3D shapes from single-view RGB-D images (en_US)
dc.type: Journal Article (en_US)
dc.identifier.doi: 10.1145/3414685.3417775
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article (en_US)
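
The abstract's notion of a "symmetric counterpart" can be illustrated concretely: for a reflectional symmetry described by a plane (a point on the plane and its normal), the counterpart of a 3D point is its mirror image across that plane. The sketch below is not the paper's network; it is only a minimal geometric illustration of what the predicted counterparts represent, with hypothetical function and argument names.

```python
import math

def reflect_point(q, plane_point, plane_normal):
    """Mirror a 3D point q across the plane through plane_point
    with the given (not necessarily unit) normal.

    Uses q' = q - 2 * ((q - p) . n_hat) * n_hat, where n_hat is the
    unit normal -- the standard reflection formula.
    """
    norm = math.sqrt(sum(c * c for c in plane_normal))
    n = [c / norm for c in plane_normal]  # unit normal
    # Signed distance of q from the plane along the normal.
    d = sum((qi - pi) * ni for qi, pi, ni in zip(q, plane_point, n))
    return [qi - 2.0 * d * ni for qi, ni in zip(q, n)]

# Example: reflecting across the x = 0 plane negates the x coordinate.
print(reflect_point([1.0, 2.0, 3.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]))
# → [-1.0, 2.0, 3.0]
```

In the paper's setting, the plane parameters are predicted by the network rather than given, and the predicted per-point counterparts serve as an auxiliary supervision signal that regularizes symmetry-axis prediction.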

Files in This Item:
File: SymmetryNet.pdf (9.43 MB, Adobe PDF)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.