
Learning to Generate 3D Training Data Through Hybrid Gradient

Author(s): Yang, Dawei; Deng, Jia

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1sr89
Abstract: Synthetic images rendered by graphics engines are a promising source for training deep networks. However, it is challenging to ensure that they can help train a network to perform well on real images, because a graphics-based generation pipeline requires numerous design decisions such as the selection of 3D shapes and the placement of the camera. In this work, we propose a new method that optimizes the generation of 3D training data based on what we call "hybrid gradient". We parametrize the design decisions as a real vector, and combine the approximate gradient and the analytical gradient to obtain the hybrid gradient of the network performance with respect to this vector. We evaluate our approach on the tasks of estimating surface normals, depth, or intrinsic decomposition from a single image. Experiments on standard benchmarks show that our approach can outperform the prior state of the art on optimizing the generation of 3D training data, particularly in terms of computational efficiency.
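The chain-rule idea behind the hybrid gradient can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it assumes a black-box `train_fn` (rendering plus network training, non-differentiable with respect to the generation parameters) and an analytical `val_grad_fn` (backprop of the validation loss through the trained network's weights). The approximate part comes from finite-difference probes through the black box; the analytical part comes from backprop, and the two are combined via the chain rule.

```python
import numpy as np

def hybrid_gradient(theta, train_fn, val_grad_fn, eps=1e-2):
    """Hedged sketch of a hybrid gradient (names and signatures assumed).

    theta       : real vector parametrizing the 3D data-generation pipeline
                  (e.g., shape selection, camera placement)
    train_fn    : theta -> trained network weights w (flattened vector);
                  rendering + training, treated as a non-differentiable black box
    val_grad_fn : w -> analytical gradient dL_val/dw of the validation loss,
                  obtained by ordinary backpropagation
    """
    w0 = train_fn(theta)              # baseline: train on data generated from theta
    gL = val_grad_fn(w0)              # analytical gradient of validation loss wrt weights
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        t = theta.copy()
        t[i] += eps                   # perturb one design decision
        wi = train_fn(t)              # retrain with the perturbed generator
        # chain rule: dL/dtheta_i ~= (dL/dw) . (w(theta + eps*e_i) - w(theta)) / eps,
        # pairing the analytic gradient gL with a finite-difference estimate of dw/dtheta_i
        grad[i] = gL.dot((wi - w0) / eps)
    return grad
```

Under these assumptions, each coordinate probe costs one extra training run, so the sketch trades additional training for a usable gradient signal through the non-differentiable renderer; the paper's reported efficiency gains over prior work suggest the actual method manages this cost more carefully than this naive loop.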
Publication Date: 2020
Citation: Yang, Dawei, and Jia Deng. "Learning to Generate 3D Training Data through Hybrid Gradient." IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020): pp. 776-786. doi:10.1109/CVPR42600.2020.00086
DOI: 10.1109/CVPR42600.2020.00086
ISSN: 1063-6919
EISSN: 2575-7075
Pages: 776 - 786
Type of Material: Conference Article
Journal/Proceeding Title: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Version: Author's manuscript


