Depth from shading, defocus, and correspondence using light-field angular coherence

Author(s): Tao, MW; Srinivasan, PP; Malik, J; Rusinkiewicz, Szymon; Ramamoorthi, R

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1jh4k
Full metadata record
dc.contributor.author: Tao, MW
dc.contributor.author: Srinivasan, PP
dc.contributor.author: Malik, J
dc.contributor.author: Rusinkiewicz, Szymon
dc.contributor.author: Ramamoorthi, R
dc.date.accessioned: 2018-07-20T15:09:31Z
dc.date.available: 2018-07-20T15:09:31Z
dc.date.issued: 2015-10-15
dc.identifier.citation: Tao, MW, Srinivasan, PP, Malik, J, Rusinkiewicz, S, Ramamoorthi, R. (2015). Depth from shading, defocus, and correspondence using light-field angular coherence. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 07-12 June 2015, 1940-1948. doi:10.1109/CVPR.2015.7298804
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr1jh4k
dc.description.abstract: Light-field cameras are now used in consumer and industrial applications. Recent papers and products have demonstrated practical depth recovery algorithms from a passive single-shot capture. However, current light-field capture devices have narrow baselines and constrained spatial resolution; therefore, the accuracy of depth recovery is limited, requiring heavy regularization and producing planar depths that do not resemble the actual geometry. Using shading information is essential to improve the shape estimation. We develop an improved technique for local shape estimation from defocus and correspondence cues, and show how shading can be used to further refine the depth. Light-field cameras are able to capture both spatial and angular data, suitable for refocusing. By locally refocusing each spatial pixel to its respective estimated depth, we produce an all-in-focus image where all viewpoints converge onto a point in the scene. Therefore, the angular pixels have angular coherence, which exhibits three properties: photo consistency, depth consistency, and shading consistency. We propose a new framework that uses angular coherence to optimize depth and shading. The optimization framework estimates both general lighting in natural scenes and shading to improve depth regularization. Our method outperforms current state-of-the-art light-field depth estimation algorithms in multiple scenarios, including real images.
dc.format.extent: 1940 - 1948
dc.language.iso: en_US
dc.relation.ispartof: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.rights: Final published version. This is an open access article.
dc.title: Depth from shading, defocus, and correspondence using light-field angular coherence
dc.type: Conference Article
dc.identifier.doi: doi:10.1109/CVPR.2015.7298804
dc.date.eissued: 2015
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding
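The abstract above outlines the local step the paper builds on: shear (refocus) the light field to a candidate depth at each pixel and evaluate a correspondence cue (angular samples should agree, i.e. photo consistency) together with a defocus cue (the refocused image should be sharp). The sketch below is a minimal, hypothetical NumPy illustration of those two cues, assuming a grayscale 4D light field stored as lf[v, u, y, x] and a simplistic unweighted combination of the cues; the function names, candidate-disparity list, and scoring are illustrative only and are not the authors' implementation, which combines the cues more carefully and further refines depth using shading and angular coherence.

# Hypothetical sketch (not the authors' code): score candidate disparities by
# refocusing (shearing) the views and measuring the two local cues named in the
# abstract: correspondence (variance across angular samples is low at the correct
# depth -- photo consistency) and defocus (spatial contrast of the refocused
# image is high at the correct depth).
import numpy as np
from scipy.ndimage import shift, laplace

def refocus(lf, alpha):
    """Shear the light field so rays from disparity `alpha` converge (illustrative)."""
    V, U, Y, X = lf.shape
    cv, cu = (V - 1) / 2.0, (U - 1) / 2.0
    sheared = np.empty_like(lf)
    for v in range(V):
        for u in range(U):
            # Translate each view toward the central view by its angular offset times alpha.
            sheared[v, u] = shift(lf[v, u],
                                  (alpha * (v - cv), alpha * (u - cu)),
                                  order=1, mode='nearest')
    return sheared

def local_depth_from_cues(lf, alphas):
    """Per pixel, pick the candidate disparity that best satisfies both cues (crude combination)."""
    _, _, Y, X = lf.shape
    best_alpha = np.zeros((Y, X))
    best_score = np.full((Y, X), -np.inf)
    for alpha in alphas:
        sheared = refocus(lf, alpha)
        refocused = sheared.mean(axis=(0, 1))       # candidate all-in-focus image at this depth
        corr_cost = sheared.var(axis=(0, 1))        # correspondence cue: angular variance
        defocus_resp = np.abs(laplace(refocused))   # defocus cue: spatial contrast
        score = defocus_resp - corr_cost            # simplistic combination, for illustration only
        better = score > best_score
        best_alpha[better] = alpha
        best_score[better] = score[better]
    return best_alpha

In the paper, a per-pixel estimate of this kind is what allows refocusing every spatial pixel to its own depth, producing the all-in-focus image whose angular samples exhibit the photo, depth, and shading consistency used in the subsequent shading-aware depth optimization.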

Files in This Item:
Depth from shading, defocus, and correspondence using light-field angular coherence.pdf (Adobe PDF, 8.92 MB)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.