Abstract: 3D context has been shown to be extremely important for scene understanding, yet very little research has been done on integrating context information with deep neural network architectures. This paper presents an approach to embed 3D context into the topology of a neural network trained to perform holistic scene understanding. Given a depth image depicting a 3D scene, our network aligns the observed scene with a predefined 3D scene template, and then reasons about the existence and location of each object within the scene template. In doing so, our model recognizes multiple objects in a single forward pass of a 3D convolutional neural network, capturing both global scene and local object information simultaneously. To create training data for this 3D network, we generate partially synthetic depth images which are rendered by replacing real objects with a repository of CAD models of the same object category. Extensive experiments demonstrate the effectiveness of our algorithm compared to the state of the art.
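The abstract's single-forward-pass recognition rests on 3D convolution over a voxelized scene. The following is a minimal illustrative sketch of a 3D convolution on a voxel grid, not the authors' implementation; the scene, kernel, and occupancy-average interpretation are assumptions for demonstration only.

```python
import numpy as np

def conv3d_valid(vol, kernel):
    """Naive 'valid'-mode 3D cross-correlation over a voxel grid."""
    kd, kh, kw = kernel.shape
    D, H, W = vol.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                # Dot product of the kernel with the local voxel neighborhood
                out[z, y, x] = np.sum(vol[z:z+kd, y:y+kh, x:x+kw] * kernel)
    return out

# Hypothetical toy scene: a cubic "object" inside an 8x8x8 occupancy grid.
scene = np.zeros((8, 8, 8))
scene[2:6, 2:6, 2:6] = 1.0

# A 3x3x3 averaging kernel: each output voxel is the local occupancy fraction.
kernel = np.ones((3, 3, 3)) / 27.0
response = conv3d_valid(scene, kernel)
print(response.shape)  # (6, 6, 6)
```

A real network of this kind would stack many such filter banks with learned weights and nonlinearities, and attach per-object prediction heads; this sketch only shows the core voxel-space operation.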
Citation: Zhang, Yinda, Mingru Bai, Pushmeet Kohli, Shahram Izadi, and Jianxiong Xiao. "DeepContext: Context-Encoding Neural Pathways for 3D Holistic Scene Understanding." In IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1201–1210. doi:10.1109/ICCV.2017.135
Pages: 1201–1210
Type of Material: Conference Article
Journal/Proceeding Title: IEEE International Conference on Computer Vision (ICCV)
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.