Abstract: Computer vision models learn to perform a task by capturing relevant statistics from training data. It has been shown that models learn spurious age, gender, and race correlations when trained for seemingly unrelated tasks like activity recognition or image captioning. Various mitigation techniques have been presented to prevent models from utilizing or learning such biases. However, there has been little systematic comparison between these techniques. We design a simple but surprisingly effective visual recognition benchmark for studying bias mitigation. Using this benchmark, we provide a thorough analysis of a wide range of techniques. We highlight the shortcomings of popular adversarial training approaches for bias mitigation, propose a simple but similarly effective alternative to the inference-time Reducing Bias Amplification method of Zhao et al., and design a domain-independent training technique that outperforms all other methods. Finally, we validate our findings on the attribute classification task in the CelebA dataset, where attribute presence is known to be correlated with the gender of people in the image, and demonstrate that the proposed technique is effective at mitigating real-world gender bias.

Citation: Wang, Zeyu, Klint Qinami, Ioannis Christos Karakozis, Kyle Genova, Prem Nair, Kenji Hata, and Olga Russakovsky. "Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation." In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020): pp. 8916-8925. doi:10.1109/CVPR42600.2020.00894

Pages: 8916–8925

Type of Material: Conference Article

Journal/Proceeding Title: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.