FrameNet: Learning Local Canonical Frames of 3D Surfaces From a Single RGB Image

Author(s): Huang, Jingwei; Zhou, Yichao; Funkhouser, Thomas; Guibas, Leonidas

Abstract: In this work, we introduce the novel problem of identifying dense canonical 3D coordinate frames from a single RGB image. We observe that each pixel in an image corresponds to a surface in the underlying 3D geometry, where a canonical frame can be represented by three orthogonal axes, one along the surface normal and two in the tangent plane. We propose an algorithm to predict these axes from RGB. Our first insight is that canonical frames computed automatically with recently introduced direction field synthesis methods can provide training data for the task. Our second insight is that networks designed for surface normal prediction provide better results when trained jointly to predict canonical frames, and even better when trained to also predict 2D projections of canonical frames. We conjecture this is because projections of canonical tangent directions often align with local gradients in images, and because those directions are tightly linked to 3D canonical frames through projective geometry and orthogonality constraints. In our experiments, we find that our method predicts 3D canonical frames that can be used in applications including surface normal estimation, feature matching, and augmented reality.
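To make the geometry concrete: a canonical frame at a pixel consists of the surface normal plus two orthogonal directions in the tangent plane. The sketch below (an illustration, not the paper's learned predictor; the Gram-Schmidt construction and the `tangent_hint` input are assumptions for demonstration) shows how such an orthonormal frame can be assembled from a normal and a candidate tangent direction:

```python
import numpy as np

def canonical_frame(normal, tangent_hint):
    """Build an orthonormal frame (t1, t2, n): n along the surface normal,
    t1 and t2 spanning the tangent plane.

    The tangent_hint is an illustrative stand-in for the direction a
    method might derive from image gradients or direction field synthesis.
    """
    n = normal / np.linalg.norm(normal)
    # Gram-Schmidt: remove the normal component from the hint,
    # leaving a unit vector in the tangent plane.
    t1 = tangent_hint - np.dot(tangent_hint, n) * n
    t1 = t1 / np.linalg.norm(t1)
    # The cross product completes a right-handed orthonormal basis.
    t2 = np.cross(n, t1)
    return np.stack([t1, t2, n])

frame = canonical_frame(np.array([0.0, 0.0, 1.0]),
                        np.array([1.0, 0.2, 0.0]))
# The rows are mutually orthogonal unit vectors: frame @ frame.T == I.
```

The orthogonality constraint mentioned in the abstract corresponds to `frame @ frame.T` being the identity: predicting any two axes determines the third, which is one reason joint prediction of normals and tangent directions can reinforce each other.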
Publication Date: 2019
Citation: Huang, Jingwei, Yichao Zhou, Thomas Funkhouser, and Leonidas J. Guibas. "FrameNet: Learning Local Canonical Frames of 3D Surfaces From a Single RGB Image." In IEEE/CVF International Conference on Computer Vision (ICCV) (2019): pp. 8637-8646. doi:10.1109/ICCV.2019.00873
DOI: 10.1109/ICCV.2019.00873
ISSN: 1550-5499
EISSN: 2380-7504
Pages: 8637 - 8646
Type of Material: Conference Article
Journal/Proceeding Title: IEEE/CVF International Conference on Computer Vision (ICCV)
Version: Author's manuscript

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.