Abstract: Many applications in 3D shape design and augmentation require the ability to make specific edits to an object's semantic parameters (e.g., the pose of a person's arm or the length of an airplane's wing) while preserving as much of the existing detail as possible. We propose to learn a deep network that infers the semantic parameters of an input shape and then allows the user to manipulate those parameters. The network is trained jointly on shapes from an auxiliary synthetic template and unlabeled realistic models, ensuring robustness to shape variability while removing the need to label realistic exemplars. At test time, edits within the parameter space drive deformations applied to the original shape, which provides semantically meaningful manipulation while preserving detail. This is in contrast to prior methods that either use autoencoders with a limited latent-space dimensionality, failing to preserve arbitrary detail, or drive deformations with purely geometric controls, such as cages, losing the ability to update local part regions. Experiments with datasets of chairs, airplanes, and human bodies demonstrate that our method produces more natural edits than prior work.
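The pipeline the abstract describes (infer semantic parameters, let the user edit them, then deform the original geometry rather than decoding a new shape) can be sketched as below. This is a minimal illustration in PyTorch, not the authors' implementation: the module names (ParamEncoder, EditDeformer), layer sizes, point-cloud input format, and max-pool encoder are all assumptions made for the example.

```python
# Minimal sketch (NOT the paper's code) of: infer semantic parameters from a
# shape, apply a user edit in parameter space, and deform the ORIGINAL
# vertices so that surface detail outside the edited region is preserved.
import torch
import torch.nn as nn

class ParamEncoder(nn.Module):
    """Infers semantic parameters (e.g., wing length, arm pose) from points."""
    def __init__(self, num_params: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                                 nn.Linear(128, 256), nn.ReLU())
        self.head = nn.Linear(256, num_params)

    def forward(self, pts):                     # pts: (N, 3)
        feat = self.mlp(pts).max(dim=0).values  # simple global max-pool
        return self.head(feat)                  # (num_params,)

class EditDeformer(nn.Module):
    """Maps a parameter edit to a per-vertex offset of the original shape."""
    def __init__(self, num_params: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 + num_params, 128), nn.ReLU(),
                                 nn.Linear(128, 3))

    def forward(self, verts, delta):            # verts: (V, 3), delta: (P,)
        d = delta.expand(verts.shape[0], -1)    # broadcast edit to vertices
        # Offset the input vertices instead of decoding a new shape, so
        # detail is kept where the edit has little effect.
        return verts + self.mlp(torch.cat([verts, d], dim=-1))

# Usage: infer parameters, change one, deform the input mesh accordingly.
encoder = ParamEncoder(num_params=8)
deformer = EditDeformer(num_params=8)
with torch.no_grad():                           # editing only, no training
    verts = torch.rand(1000, 3)                 # stand-in for a real mesh
    params = encoder(verts)
    edited = params.clone()
    edited[0] += 0.5                            # e.g., lengthen a wing
    new_verts = deformer(verts, edited - params)
```

In a real system the encoder and deformer would be trained jointly (per the abstract, on a labeled synthetic template plus unlabeled realistic models); the sketch only shows the test-time edit path.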
Citation: Wei, Fangyin, Elena Sizikova, Avneesh Sud, Szymon Rusinkiewicz, and Thomas Funkhouser. "Learning to Infer Semantic Parameters for 3D Shape Editing." In International Conference on 3D Vision (3DV) (2020): pp. 434–442. doi:10.1109/3DV50981.2020.00053
Pages: 434–442
Type of Material: Conference Article
Journal/Proceeding Title: International Conference on 3D Vision (3DV)