ActiveStereoNet: End-to-End Self-supervised Learning for Active Stereo Systems

Author(s): Zhang, Yinda; Khamis, Sameh; Rhemann, Christoph; Valentin, Julien; Kowdle, Adarsh; Tankovich, Vladimir; Schoenberg, Michael; Izadi, Shahram; Funkhouser, Thomas; Fanello, Sean

Abstract: In this paper we present ActiveStereoNet, the first deep learning solution for active stereo systems. Due to the lack of ground truth, our method is fully self-supervised, yet it produces precise depth with a subpixel precision of 1/30th of a pixel; it does not suffer from the common over-smoothing issues; it preserves edges; and it explicitly handles occlusions. We introduce a novel reconstruction loss that is more robust to noise and texture-less patches, and is invariant to illumination changes. The proposed loss is optimized using a window-based cost aggregation with an adaptive support weight scheme. This cost aggregation is edge-preserving and smooths the loss function, which is key to allowing the network to reach compelling results. Finally, we show how the task of predicting invalid regions, such as occlusions, can be trained end-to-end without ground truth. This component is crucial to reduce blur and particularly improves predictions along depth discontinuities. Extensive quantitative and qualitative evaluations on real and synthetic data demonstrate state-of-the-art results in many challenging scenes.
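The adaptive support weight cost aggregation mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it follows the classic Yoon-Kweon weighting idea the abstract alludes to, where each neighbor's contribution to a pixel's aggregated cost is down-weighted by its intensity difference from the center pixel and by its spatial distance, which keeps the aggregation edge-preserving. The function name, parameters, and the single-channel guide image are illustrative assumptions.

```python
import numpy as np

def asw_aggregate(cost, guide, radius=3, sigma_c=10.0, sigma_s=3.0):
    """Aggregate a per-pixel matching cost with adaptive support weights.

    Illustrative sketch (not the paper's code): for each pixel, neighbors
    within a (2*radius+1)^2 window contribute to the aggregated cost with
    a weight that decays with intensity difference (sigma_c) and spatial
    distance (sigma_s), so support does not cross strong edges.

    cost  -- 2-D array, matching cost for one disparity hypothesis
    guide -- 2-D array, intensity image used to compute the weights
    """
    num = np.zeros_like(cost, dtype=np.float64)
    den = np.zeros_like(cost, dtype=np.float64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # Bring the neighbor at offset (dy, dx) into alignment
            # with the center pixel (wrap-around borders for brevity).
            shifted_cost = np.roll(cost, (dy, dx), axis=(0, 1))
            shifted_guide = np.roll(guide, (dy, dx), axis=(0, 1))
            # Photometric similarity and spatial proximity terms.
            w_color = np.exp(-np.abs(guide - shifted_guide) / sigma_c)
            w_space = np.exp(-np.hypot(dy, dx) / sigma_s)
            w = w_color * w_space
            num += w * shifted_cost
            den += w
    return num / den
```

In the paper's setting this kind of aggregation is applied to the reconstruction loss itself rather than to a classical cost volume, but the weighting mechanism is the same: support adapts to image content instead of being a fixed box filter.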
Publication Date: 2018
Citation: Zhang, Yinda, Sameh Khamis, Christoph Rhemann, Julien Valentin, Adarsh Kowdle, Vladimir Tankovich, Michael Schoenberg, Shahram Izadi, Thomas Funkhouser, and Sean Fanello. "ActiveStereoNet: End-to-End Self-supervised Learning for Active Stereo Systems." In European Conference on Computer Vision (ECCV) (2018): pp. 802-819. doi: 10.1007/978-3-030-01237-3_48
DOI: 10.1007/978-3-030-01237-3_48
ISSN: 0302-9743
EISSN: 1611-3349
Pages: 802 - 819
Type of Material: Conference Article
Journal/Proceeding Title: European Conference on Computer Vision (ECCV)
Version: Author's manuscript

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.