Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation

Author(s): Wang, Zeyu; Qinami, Klint; Karakozis, Ioannis C; Genova, Kyle; Nair, Prem; Hata, Kenji; Russakovsky, Olga

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1jg1z
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wang, Zeyu | -
dc.contributor.author | Qinami, Klint | -
dc.contributor.author | Karakozis, Ioannis C | -
dc.contributor.author | Genova, Kyle | -
dc.contributor.author | Nair, Prem | -
dc.contributor.author | Hata, Kenji | -
dc.contributor.author | Russakovsky, Olga | -
dc.date.accessioned | 2021-10-08T19:50:17Z | -
dc.date.available | 2021-10-08T19:50:17Z | -
dc.date.issued | 2020 | en_US
dc.identifier.citation | Wang, Zeyu, Klint Qinami, Ioannis Christos Karakozis, Kyle Genova, Prem Nair, Kenji Hata, and Olga Russakovsky. "Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation." In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020): pp. 8916-8925. doi:10.1109/CVPR42600.2020.00894 | en_US
dc.identifier.issn | 1063-6919 | -
dc.identifier.uri | https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_Towards_Fairness_in_Visual_Recognition_Effective_Strategies_for_Bias_Mitigation_CVPR_2020_paper.pdf | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1jg1z | -
dc.description.abstract | Computer vision models learn to perform a task by capturing relevant statistics from training data. It has been shown that models learn spurious age, gender, and race correlations when trained for seemingly unrelated tasks like activity recognition or image captioning. Various mitigation techniques have been presented to prevent models from utilizing or learning such biases. However, there has been little systematic comparison between these techniques. We design a simple but surprisingly effective visual recognition benchmark for studying bias mitigation. Using this benchmark, we provide a thorough analysis of a wide range of techniques. We highlight the shortcomings of popular adversarial training approaches for bias mitigation, propose a simple but similarly effective alternative to the inference-time Reducing Bias Amplification method of Zhao et al., and design a domain-independent training technique that outperforms all other methods. Finally, we validate our findings on the attribute classification task in the CelebA dataset, where attribute presence is known to be correlated with the gender of people in the image, and demonstrate that the proposed technique is effective at mitigating real-world gender bias. | en_US
dc.format.extent | 8916 - 8925 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | en_US
dc.rights | Author's manuscript | en_US
dc.title | Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation | en_US
dc.type | Conference Article | en_US
dc.identifier.doi | 10.1109/CVPR42600.2020.00894 | -
dc.identifier.eissn | 2575-7075 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US

Files in This Item:
File | Description | Size | Format
FairnessVisualRecog.pdf | | 1.44 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.