Shape Estimation from Shading, Defocus, and Correspondence Using Light-Field Angular Coherence

Author(s): Tao, Michael W; Srinivasan, Pratul P; Hadap, Sunil; Rusinkiewicz, Szymon; Malik, Jitendra; Ramamoorthi, Ravi

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1k82f
Full metadata record
DC Field | Value | Language
dc.contributor.author | Tao, Michael W | -
dc.contributor.author | Srinivasan, Pratul P | -
dc.contributor.author | Hadap, Sunil | -
dc.contributor.author | Rusinkiewicz, Szymon | -
dc.contributor.author | Malik, Jitendra | -
dc.contributor.author | Ramamoorthi, Ravi | -
dc.date.accessioned | 2021-10-08T19:47:16Z | -
dc.date.available | 2021-10-08T19:47:16Z | -
dc.date.issued | 2017-03 | en_US
dc.identifier.citation | Tao, Michael W., Pratul P. Srinivasan, Sunil Hadap, Szymon Rusinkiewicz, Jitendra Malik, and Ravi Ramamoorthi. "Shape Estimation from Shading, Defocus, and Correspondence Using Light-Field Angular Coherence." IEEE Transactions on Pattern Analysis and Machine Intelligence 39, no. 3 (2017): 546-560. doi:10.1109/TPAMI.2016.2554121 | en_US
dc.identifier.issn | 0162-8828 | -
dc.identifier.uri | http://cseweb.ucsd.edu/~ravir/normals_PAMI.pdf | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1k82f | -
dc.description.abstract | Light-field cameras are quickly becoming commodity items, with consumer and industrial applications. They capture many nearby views simultaneously using a single image with a micro-lens array, thereby providing a wealth of cues for depth recovery: defocus, correspondence, and shading. In particular, apart from conventional image shading, one can refocus images after acquisition, and shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. We present a principled algorithm for dense depth estimation that combines defocus and correspondence metrics. We then extend our analysis to the additional cue of shading, using it to refine fine details in the shape. By exploiting an all-in-focus image, in which pixels are expected to exhibit angular coherence, we define an optimization framework that integrates photo consistency, depth consistency, and shading consistency. We show that combining all three sources of information: defocus, correspondence, and shading, outperforms state-of-the-art light-field depth estimation algorithms in multiple scenarios. | en_US
dc.format.extent | 546-560 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | IEEE Transactions on Pattern Analysis and Machine Intelligence | en_US
dc.rights | Author's manuscript | en_US
dc.title | Shape Estimation from Shading, Defocus, and Correspondence Using Light-Field Angular Coherence | en_US
dc.type | Journal Article | en_US
dc.identifier.doi | 10.1109/TPAMI.2016.2554121 | -
dc.identifier.eissn | 1939-3539 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US
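
As a rough illustration of the defocus-plus-correspondence idea described in the abstract above, the following Python sketch refocuses a 4D light field at a set of candidate depths and, at each pixel, keeps the depth that jointly maximizes spatial contrast (defocus cue) and minimizes angular variance (correspondence cue). It is a toy under stated assumptions, not the authors' implementation: the function name fuse_defocus_correspondence, the unweighted sum of the two costs, and the integer-pixel shearing are all illustrative choices.

import numpy as np

def fuse_defocus_correspondence(light_field, depths):
    # Hypothetical sketch, not the paper's released code.
    # light_field: array of shape (U, V, X, Y) holding sub-aperture views.
    # depths: candidate disparities, in pixels of shift per angular step.
    U, V, X, Y = light_field.shape
    u = np.arange(U) - (U - 1) / 2.0
    v = np.arange(V) - (V - 1) / 2.0
    best_cost = np.full((X, Y), np.inf)
    depth_map = np.zeros((X, Y))
    for d in depths:
        # Shear (refocus) the light field to depth d by shifting each sub-aperture view.
        refocused = np.empty_like(light_field, dtype=float)
        for i in range(U):
            for j in range(V):
                sx = int(round(d * u[i]))
                sy = int(round(d * v[j]))
                refocused[i, j] = np.roll(light_field[i, j], (sx, sy), axis=(0, 1))
        mean_img = refocused.mean(axis=(0, 1))
        # Defocus cue: a correctly refocused region has high spatial contrast,
        # so use negative gradient magnitude of the angular mean as a cost.
        gy, gx = np.gradient(mean_img)
        defocus_cost = -np.sqrt(gx ** 2 + gy ** 2)
        # Correspondence cue: angular coherence implies low variance across views
        # at the true depth, so use the angular variance directly as a cost.
        corr_cost = refocused.var(axis=(0, 1))
        cost = defocus_cost + corr_cost  # naive, unweighted fusion of the two cues
        better = cost < best_cost
        best_cost = np.where(better, cost, best_cost)
        depth_map = np.where(better, d, depth_map)
    return depth_map

A real implementation would interpolate sub-pixel shears, weight the two cues by per-pixel confidence, and regularize the result; the paper goes further by adding shading and enforcing photo, depth, and shading consistency in an optimization framework.
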

Files in This Item:
File | Description | Size | Format
ShapeEstimationShadingDefocusCorrespondenceLightFieldAngularCoherence.pdf | | 65.3 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.