
Generative Modeling for Small-Data Object Detection

Author(s): Liu, Lanlan; Muelly, Michael; Deng, Jia; Pfister, Tomas; Li, Li-Jia

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1tn8h
Abstract: This paper explores object detection in the small-data regime, where only a limited number of annotated bounding boxes are available due to data rarity and annotation expense. This is a common challenge today as machine learning is applied to new tasks where training data is hard to obtain, e.g. medical images of rare diseases that doctors may see only once in their lifetime. In this work we approach the problem from a generative modeling perspective: we learn to generate new images with associated bounding boxes and use them to train an object detector. We show that simply training previously proposed generative models does not yield satisfactory performance, because they optimize for image realism rather than object detection accuracy. To this end we develop a new model with a novel unrolling mechanism that jointly optimizes the generative model and a detector, so that the generated images improve the detector's performance. We show this method outperforms the state of the art on two challenging datasets, disease detection and small-data pedestrian detection, improving average precision on NIH Chest X-ray by a relative 20% and localization accuracy by a relative 50%.
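
The unrolling idea in the abstract, rating the generator by how much a detector trained on the generated images improves on real annotated data, can be illustrated with a one-step unrolled update. The following is a minimal, hypothetical PyTorch 2.x sketch; TinyGenerator, TinyDetector, and the random stand-in boxes are invented for illustration and are not the paper's implementation.

import torch
import torch.nn as nn

# Minimal stand-ins (invented for illustration): the paper's generator is a
# GAN that synthesizes images together with bounding boxes, and the detector
# is a full object detector; here both are tiny MLPs on flat vectors.
class TinyGenerator(nn.Module):
    def __init__(self, z_dim=16, img_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, img_dim))
    def forward(self, z):
        return self.net(z)

class TinyDetector(nn.Module):
    def __init__(self, img_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(img_dim, 64), nn.ReLU(), nn.Linear(64, 4))
    def forward(self, x):
        return self.net(x)  # predicts one box (x, y, w, h) per image

gen, det = TinyGenerator(), TinyDetector()
opt_gen = torch.optim.Adam(gen.parameters(), lr=1e-4)
box_loss = nn.SmoothL1Loss()

def unrolled_generator_step(real_imgs, real_boxes, lr_inner=1e-2):
    # Generate synthetic training examples; in the real model the boxes come
    # from the generator, here they are random stand-ins.
    z = torch.randn(real_imgs.size(0), 16)
    fake_imgs = gen(z)
    fake_boxes = torch.rand(real_imgs.size(0), 4)

    # Inner (unrolled) step: one differentiable gradient step of a *virtual*
    # detector on the generated data.
    params = dict(det.named_parameters())
    inner_loss = box_loss(torch.func.functional_call(det, params, fake_imgs), fake_boxes)
    grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
    virtual = {name: p - lr_inner * g for (name, p), g in zip(params.items(), grads)}

    # Outer step: update the generator so that the virtually-updated detector
    # performs well on real annotated data (the real detector's own training
    # step is omitted here).
    outer_loss = box_loss(torch.func.functional_call(det, virtual, real_imgs), real_boxes)
    opt_gen.zero_grad()
    outer_loss.backward()
    opt_gen.step()
    return outer_loss.item()

# Toy usage on random stand-in data
real_imgs, real_boxes = torch.randn(8, 64), torch.rand(8, 4)
print(unrolled_generator_step(real_imgs, real_boxes))
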
Publication Date: 2019
Citation: Liu, Lanlan, Michael Muelly, Jia Deng, Tomas Pfister, and Li-Jia Li. "Generative Modeling for Small-Data Object Detection." Proceedings of the IEEE International Conference on Computer Vision 1 (2019), pp. 6072-6080. doi:10.1109/ICCV.2019.00617
DOI: 10.1109/ICCV.2019.00617
ISSN: 1550-5499
Pages: 6072 - 6080
Type of Material: Conference Article
Journal/Proceeding Title: Proceedings of the IEEE International Conference on Computer Vision
Version: Author's manuscript



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.