Abstract: Progress in scene understanding requires reasoning about the rich and diverse visual environments that make up our daily experience. To this end, we propose the Scene Understanding (SUN) database, a nearly exhaustive collection of scenes categorized at the same level of specificity as human discourse. The database contains 908 distinct scene categories and 131,072 images. Given this data, with both scene and object labels available, we perform an in-depth analysis of co-occurrence statistics and of the contextual relationship between scenes and objects. To better understand this large-scale taxonomy of scene categories, we perform two human experiments: we quantify human scene recognition accuracy, and we measure how typical each image is of its assigned scene category. Next, we perform computational experiments: scene recognition with global image features, indoor versus outdoor classification, and “scene detection,” in which we relax the assumption that one image depicts only one scene category. Finally, we relate the human experiments to machine performance, exploring the relationship between human and machine recognition errors and the relationship between image “typicality” and machine recognition accuracy.
Citation: Xiao, Jianxiong, Krista A. Ehinger, James Hays, Antonio Torralba, and Aude Oliva. "SUN Database: Exploring a Large Collection of Scene Categories." International Journal of Computer Vision 119, no. 1 (2016): 3–22. doi:10.1007/s11263-014-0748-y
Pages: 3–22
Type of Material: Journal Article
Journal/Proceeding Title: International Journal of Computer Vision
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.