Abstract: The goal of our work is to complete the depth channel of an RGB-D image. Commodity-grade depth cameras often fail to sense depth for shiny, bright, transparent, and distant surfaces. To address this problem, we train a deep network that takes an RGB image as input and predicts dense surface normals and occlusion boundaries. These predictions are then combined with the raw depth observations provided by the RGB-D camera to solve for the depth at every pixel, including those missing from the original observation. This method was chosen over alternatives (e.g., inpainting depths directly) on the basis of extensive experiments with a new depth completion benchmark dataset, in which holes in the training data are filled by rendering surface reconstructions created from multiview RGB-D scans. Experiments with different network inputs, depth representations, loss functions, optimization methods, inpainting methods, and deep depth estimation networks show that our proposed approach provides better depth completions than these alternatives.
Citation: Zhang, Yinda, and Thomas Funkhouser. "Deep Depth Completion of a Single RGB-D Image." In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 175-185. doi:10.1109/CVPR.2018.00026

Pages: 175-185

Type of Material: Conference Article

Journal/Proceeding Title: IEEE/CVF Conference on Computer Vision and Pattern Recognition
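The abstract above describes a global optimization that combines network predictions with raw depth observations to solve for depth at every pixel. A minimal sketch of that idea is shown below as a sparse linear least-squares problem: observed depths act as a data term, and a regularization term between neighboring pixels is down-weighted across likely occlusion boundaries. Note this is a simplified illustration, not the paper's implementation — the actual method constrains depths with the predicted surface normals, whereas this sketch substitutes plain first-order smoothness, and the function name and weights are hypothetical.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def complete_depth(raw_depth, boundary_prob, w_data=1.0, w_smooth=0.3):
    """Fill holes (zeros) in raw_depth by sparse least squares.

    Simplified stand-in for the paper's global solve: a data term keeps
    observed depths, and a smoothness term between 4-neighbors is weakened
    where the (hypothetical) occlusion-boundary probability is high.
    """
    H, W = raw_depth.shape
    n = H * W
    idx = lambda r, c: r * W + c
    ri, ci, v, b = [], [], [], []
    row = 0
    # Data term: anchor pixels where the camera returned a depth reading.
    for r in range(H):
        for c in range(W):
            if raw_depth[r, c] > 0:
                ri.append(row); ci.append(idx(r, c)); v.append(w_data)
                b.append(w_data * raw_depth[r, c]); row += 1
    # Smoothness term: encourage equal depth across right/down neighbors,
    # down-weighted near predicted occlusion boundaries.
    for r in range(H):
        for c in range(W):
            for dr, dc in ((0, 1), (1, 0)):
                r2, c2 = r + dr, c + dc
                if r2 < H and c2 < W:
                    w = w_smooth * (1.0 - max(boundary_prob[r, c],
                                              boundary_prob[r2, c2]))
                    ri += [row, row]
                    ci += [idx(r, c), idx(r2, c2)]
                    v += [w, -w]
                    b.append(0.0); row += 1
    A = coo_matrix((v, (ri, ci)), shape=(row, n)).tocsr()
    return lsqr(A, np.array(b))[0].reshape(H, W)
```

Because both terms are linear in the unknown depths, the whole solve reduces to one sparse least-squares system, which is what makes a global (rather than purely local inpainting) formulation tractable at image resolution.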
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.