3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions
Author(s): Zeng, Andy; Song, Shuran; Nießner, Matthias; Fisher, Matthew; Xiao, Jianxiong; et al.
To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr1m836
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zeng, Andy | - |
dc.contributor.author | Song, Shuran | - |
dc.contributor.author | Nießner, Matthias | - |
dc.contributor.author | Fisher, Matthew | - |
dc.contributor.author | Xiao, Jianxiong | - |
dc.contributor.author | Funkhouser, Thomas | - |
dc.date.accessioned | 2021-10-08T19:50:25Z | - |
dc.date.available | 2021-10-08T19:50:25Z | - |
dc.date.issued | 2017 | en_US |
dc.identifier.citation | Zeng, Andy, Shuran Song, Matthias Nießner, Matthew Fisher, Jianxiong Xiao, and Thomas Funkhouser. "3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions." In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017): pp. 199-208. doi:10.1109/CVPR.2017.29 | en_US |
dc.identifier.issn | 1063-6919 | - |
dc.identifier.uri | https://openaccess.thecvf.com/content_cvpr_2017/papers/Zeng_3DMatch_Learning_Local_CVPR_2017_paper.pdf | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1m836 | - |
dc.description.abstract | Matching local geometric features on real-world depth images is a challenging task due to the noisy, low-resolution, and incomplete nature of 3D scan data. These difficulties limit the performance of current state-of-the-art methods, which are typically based on histograms over geometric properties. In this paper, we present 3DMatch, a data-driven model that learns a local volumetric patch descriptor for establishing correspondences between partial 3D data. To amass training data for our model, we propose a self-supervised feature learning method that leverages the millions of correspondence labels found in existing RGB-D reconstructions. Experiments show that our descriptor is not only able to match local geometry in new scenes for reconstruction, but also to generalize to different tasks and spatial scales (e.g. instance-level object model alignment for the Amazon Picking Challenge, and mesh surface correspondence). Results show that 3DMatch consistently outperforms other state-of-the-art approaches by a significant margin. Code, data, benchmarks, and pre-trained models are available online at http://3dmatch.cs.princeton.edu. | en_US |
dc.format.extent | 199 - 208 | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartof | IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | en_US |
dc.rights | Author's manuscript | en_US |
dc.title | 3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions | en_US |
dc.type | Conference Article | en_US |
dc.identifier.doi | 10.1109/CVPR.2017.29 | - |
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US |
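The abstract above describes matching learned descriptors of local volumetric patches by nearest neighbor in feature space to establish correspondences between partial 3D fragments. As a minimal, hypothetical sketch of that matching step only, the Python snippet below stands in for the learned 3DMatch ConvNet with a fixed random projection; the patch size, descriptor dimension, and function names are illustrative assumptions, not the authors' released code (see the project page linked in the abstract).

```python
# Hypothetical sketch of the descriptor-matching step described in the abstract.
# The learned 3DMatch 3D ConvNet is replaced by a fixed random projection purely
# as a placeholder; patch size, descriptor dimension, and names are assumptions.
import numpy as np
from scipy.spatial import cKDTree

PATCH = 30  # assumed voxels per side of the local TSDF patch


def extract_patch(tsdf, center, size=PATCH):
    """Crop a size^3 truncated signed distance (TSDF) patch around a voxel center."""
    r = size // 2
    x, y, z = center
    return tsdf[x - r:x + r, y - r:y + r, z - r:z + r]


def describe(patch, proj):
    """Placeholder descriptor: flatten the patch and apply a fixed linear projection.
    In the paper this step is a learned 3D ConvNet producing a feature vector."""
    return proj @ patch.reshape(-1).astype(np.float32)


def match_keypoints(tsdf_a, keys_a, tsdf_b, keys_b, dim=512, seed=0):
    """Describe keypoints in two fragments and pair them by nearest neighbor in
    descriptor space -- the correspondence step the abstract refers to."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((dim, PATCH ** 3)).astype(np.float32)
    desc_a = np.stack([describe(extract_patch(tsdf_a, k), proj) for k in keys_a])
    desc_b = np.stack([describe(extract_patch(tsdf_b, k), proj) for k in keys_b])
    dist, idx = cKDTree(desc_b).query(desc_a, k=1)
    return list(zip(range(len(keys_a)), idx.tolist(), dist.tolist()))


# Toy usage on synthetic volumes; real inputs would be TSDF fragments fused from RGB-D scans.
vol_a = np.random.rand(64, 64, 64).astype(np.float32)
vol_b = np.random.rand(64, 64, 64).astype(np.float32)
keypoints = [(32, 32, 32), (20, 25, 30)]
print(match_keypoints(vol_a, keypoints, vol_b, keypoints))
```

In a full pipeline, the resulting correspondences would typically feed a rigid alignment step (e.g. RANSAC over matched keypoints) to register the two fragments.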
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
LearningLocal.pdf | | 1.75 MB | Adobe PDF |
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.