| Abstract: | We study the problem of detecting the symmetries of 3D shapes from single-view RGB-D images, where severely missing data renders geometric detection approaches infeasible. We propose an end-to-end deep neural network that predicts both the reflectional and rotational symmetries of 3D objects present in the input RGB-D image. Directly training a deep model for symmetry prediction, however, can quickly run into the issue of overfitting. We therefore adopt a multi-task learning approach: aside from symmetry axis prediction, our network is also trained to predict symmetry correspondences. In particular, given the 3D points present in the RGB-D image, our network outputs, for each 3D point, its symmetric counterpart under a specific predicted symmetry. In addition, our network can detect, for a given shape, multiple symmetries of different types. We also contribute a benchmark for 3D symmetry detection based on single-view RGB-D images. Extensive evaluation on the benchmark demonstrates the strong generalization ability of our method, in terms of high accuracy in both symmetry axis prediction and counterpart estimation. In particular, our method is robust in handling unseen object instances with large variation in shape and multi-symmetry composition, as well as novel object categories. |
| Citation: | Shi, Yifei, Junwen Huang, Hongjia Zhang, Xin Xu, Szymon Rusinkiewicz, and Kai Xu. "SymmetryNet: learning to predict reflectional and rotational symmetries of 3D shapes from single-view RGB-D images." ACM Transactions on Graphics (TOG) 39, no. 6 (2020): 1–14. doi:10.1145/3414685.3417775 |
| Pages: | 1–14 |
| Type of Material: | Journal Article |
| Journal/Proceeding Title: | ACM Transactions on Graphics |
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.