Enhancing Fairness of Visual Attribute Predictors

Conference paper in: Computer Vision – ACCV 2022 (ACCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13846)

Abstract

The performance of deep neural networks on image recognition tasks, such as predicting whether a face is smiling, is known to degrade for under-represented classes of sensitive attributes. We address this problem by introducing fairness-aware regularization losses based on batch estimates of Demographic Parity, Equalized Odds, and a novel Intersection-over-Union measure. Experiments on facial and medical images from CelebA, UTKFace, and the SIIM-ISIC Melanoma classification challenge demonstrate the effectiveness of the proposed fairness losses for bias mitigation: they improve model fairness while maintaining high classification performance. To the best of our knowledge, our work is the first attempt to incorporate these types of losses into an end-to-end training scheme to mitigate the biases of visual attribute predictors.

Source code is available at https://github.com/nish03/FVAP.

Notes

  1. Strictly speaking, this holds directly only if \(p_\theta (y^*_s)\) is uniform; otherwise, (3) corresponds to a squared difference between \(p_\theta (y_t,y^*_s)\) and \(p_\theta (y_t)\cdot p_\theta (y^*_s)\), where the addends are additionally weighted by \(1/p_\theta (y^*_s)^2\) (spelled out below these notes).

  2. We also assume that probabilities are differentiable w.r.t. parameters.

  3. For simplicity, we discuss in detail only the case of binary variables and the IoU-loss; the situation is similar for the other cases.


Acknowledgement

This work primarily received funding from the German Federal Ministry of Education and Research (BMBF) under Software Campus (grant 01IS17044) and the Competence Center for Big Data and AI ScaDS.AI Dresden/Leipzig (grant 01IS18026A-F). The work also received funding from the Deutsche Forschungsgemeinschaft (DFG) (grant 389792660) as part of TRR 248 and the Cluster of Excellence CeTI (EXC 2050/1, grant 390696704). The authors gratefully acknowledge the Center for Information Services and High Performance Computing (ZIH) at TU Dresden for providing computing resources.

Author information

Corresponding author: Tobias Hänel.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (zip 877 KB)

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Hänel, T. et al. (2023). Enhancing Fairness of Visual Attribute Predictors. In: Wang, L., Gall, J., Chin, TJ., Sato, I., Chellappa, R. (eds) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol 13846. Springer, Cham. https://doi.org/10.1007/978-3-031-26351-4_10

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-26351-4_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26350-7

  • Online ISBN: 978-3-031-26351-4

  • eBook Packages: Computer Science, Computer Science (R0)
