
Learning to Infer Semantic Parameters for 3D Shape Editing

Author(s): Wei, Fangyin; Sizikova, Elena; Sud, Avneesh; Rusinkiewicz, Szymon; Funkhouser, Thomas

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr13z8p
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wei, Fangyin | -
dc.contributor.author | Sizikova, Elena | -
dc.contributor.author | Sud, Avneesh | -
dc.contributor.author | Rusinkiewicz, Szymon | -
dc.contributor.author | Funkhouser, Thomas | -
dc.date.accessioned | 2021-10-08T19:50:53Z | -
dc.date.available | 2021-10-08T19:50:53Z | -
dc.date.issued | 2020 | en_US
dc.identifier.citation | Wei, Fangyin, Elena Sizikova, Avneesh Sud, Szymon Rusinkiewicz, and Thomas Funkhouser. "Learning to Infer Semantic Parameters for 3D Shape Editing." In International Conference on 3D Vision (3DV) (2020): pp. 434-442. doi:10.1109/3DV50981.2020.00053 | en_US
dc.identifier.issn | 2378-3826 | -
dc.identifier.uri | https://arxiv.org/pdf/2011.04755.pdf | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr13z8p | -
dc.description.abstract | Many applications in 3D shape design and augmentation require the ability to make specific edits to an object's semantic parameters (e.g., the pose of a person's arm or the length of an airplane's wing) while preserving as much existing detail as possible. We propose to learn a deep network that infers the semantic parameters of an input shape and then allows the user to manipulate those parameters. The network is trained jointly on shapes from an auxiliary synthetic template and unlabeled realistic models, ensuring robustness to shape variability while relieving the need to label realistic exemplars. At test time, edits within the parameter space drive deformations applied to the original shape, providing semantically meaningful manipulation while preserving the details. This is in contrast to prior methods that either use autoencoders with a limited latent-space dimensionality, failing to preserve arbitrary detail, or drive deformations with purely geometric controls, such as cages, losing the ability to update local part regions. Experiments with datasets of chairs, airplanes, and human bodies demonstrate that our method produces more natural edits than prior work. | en_US
dc.format.extent | 434 - 442 | en_US
dc.relation.ispartof | International Conference on 3D Vision (3DV) | en_US
dc.rights | Author's manuscript | en_US
dc.title | Learning to Infer Semantic Parameters for 3D Shape Editing | en_US
dc.type | Conference Article | en_US
dc.identifier.doi | 10.1109/3DV50981.2020.00053 | -
dc.identifier.eissn | 2475-7888 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US
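The abstract describes inferring semantic parameters from a shape and then deforming the original geometry (rather than decoding from a latent code) when a user edits one parameter. The following is a minimal toy sketch of that edit-then-deform idea, not the paper's method: the bounding-box-extent "encoder" and the axis-scaling deformation are hypothetical stand-ins for the learned network and its deformation model.

```python
import numpy as np

def infer_parameters(points):
    # Stand-in for the learned encoder: treat the bounding-box extent
    # along each axis as a "semantic parameter" (e.g., wingspan).
    return points.max(axis=0) - points.min(axis=0)

def edit_shape(points, axis, new_extent):
    # Warp the ORIGINAL points so the chosen parameter reaches
    # new_extent; details survive because we deform the input shape
    # instead of reconstructing it from a low-dimensional latent code.
    params = infer_parameters(points)
    center = points.mean(axis=0)
    scale = np.ones(3)
    scale[axis] = new_extent / params[axis]
    return (points - center) * scale + center

# Toy "airplane": random points in a box 4 long (x) and 6 wide (y).
rng = np.random.default_rng(0)
shape = rng.uniform([-2, -3, -0.5], [2, 3, 0.5], size=(100, 3))
edited = edit_shape(shape, axis=1, new_extent=9.0)  # stretch the "wings"
```

After the edit, the y-extent of the shape is exactly 9.0 while the other axes (and all relative point positions within each axis) are untouched, illustrating why parameter-driven deformation preserves detail where a bottlenecked autoencoder would not.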

Files in This Item:
File | Description | Size | Format
LearnSemanticParam.pdf | | 35.37 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.