|Abstract:||Cameras that capture color and depth information have become an essential imaging modality for applications in robotics, autonomous driving, and virtual and augmented reality. Existing RGB-D cameras rely on multiple sensors or on active illumination with specialized sensors. In this work, we propose a method for monocular single-shot RGB-D imaging. Instead of learning depth from single-image depth cues, we revisit double-refraction imaging using a birefringent medium, measuring depth as the displacement between the differently refracted images superimposed in a single capture. However, existing double-refraction methods are orders of magnitude too slow for real-time applications, e.g., in robotics, and provide only inaccurate depth due to correspondence ambiguity in double refraction. We resolve this ambiguity optically by leveraging the orthogonality of the two linearly polarized rays in double refraction -- introducing uneven double refraction by adding a linear polarizer to the birefringent medium. Doing so makes it possible to reconstruct sparse depth and color simultaneously in real time. We validate the proposed method, both synthetically and experimentally, and demonstrate 3D object detection and photographic applications.|
|Citation:||Meuleman, Andreas, Seung-Hwan Baek, Felix Heide, and Min H. Kim. "Single-Shot Monocular RGB-D Imaging Using Uneven Double Refraction." In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 2462-2471. doi:10.1109/CVPR42600.2020.00254|
|Pages:||2462-2471|
|Type of Material:||Conference Article|
|Journal/Proceeding Title:||Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition|
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.