Human uncertainty makes classification more robust

Author(s): Peterson, Joshua; Battleday, Ruairidh; Griffiths, Thomas; Russakovsky, Olga

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1gc0q
Abstract: The classification performance of deep neural networks has begun to asymptote at near-perfect levels. However, their ability to generalize outside the training set and their robustness to adversarial attacks have not. In this paper, we make progress on this problem by training with full label distributions that reflect human perceptual uncertainty. We first present a new benchmark dataset, which we call CIFAR10H, containing a full distribution of human labels for each image of the CIFAR10 test set. We then show that, while contemporary classifiers fail to exhibit human-like uncertainty on their own, explicit training on our dataset closes this gap, supports improved generalization to increasingly out-of-training-distribution test datasets, and confers robustness to adversarial attacks.
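As a minimal illustration of the training objective the abstract describes (not the authors' released code), the sketch below replaces one-hot targets with a full label distribution and minimizes a soft-label cross-entropy. The model and the label distributions here are stand-ins; in the paper, the targets would be the normalized human label frequencies from CIFAR10H.

```python
import torch
import torch.nn.functional as F

def soft_cross_entropy(logits, target_dist):
    """Cross-entropy against a full label distribution (soft labels)
    rather than a single one-hot class index."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(target_dist * log_probs).sum(dim=-1).mean()

# Hypothetical usage: `logits` stands in for model(images) on a batch of
# 32 CIFAR-10 images, and `human_label_dist` stands in for a (32, 10)
# tensor of normalized human label frequencies per image (as in CIFAR10H).
logits = torch.randn(32, 10, requires_grad=True)
human_label_dist = torch.softmax(torch.randn(32, 10), dim=-1)

loss = soft_cross_entropy(logits, human_label_dist)
loss.backward()
```

With targets that sum to one over the classes, this loss reduces to ordinary cross-entropy when a distribution is one-hot, so the same training loop can mix hard and soft labels.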
Publication Date: 2019
Citation: Peterson, Joshua, Battleday, Ruairidh, Griffiths, Thomas, and Russakovsky, Olga. (2019). Human uncertainty makes classification more robust. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9617-9626.
Pages: 9617–9626
Type of Material: Conference Article
Journal/Proceeding Title: Proceedings of the IEEE International Conference on Computer Vision
Version: Author's manuscript



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.