
Studying Bias in GANs Through the Lens of Race

Conference paper, published in: Computer Vision – ECCV 2022 (ECCV 2022). Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13673).

Abstract

In this work, we study how the performance and evaluation of generative image models are impacted by the racial composition of their training datasets. By examining and controlling the racial distributions in various training datasets, we are able to observe the impacts of different training distributions on generated image quality and the racial distributions of the generated images. Our results show that the racial compositions of generated images closely preserve those of the training data. However, we observe that truncation, a technique used to generate higher quality images during inference, exacerbates racial imbalances in the data. Lastly, when examining the relationship between image quality and race, we find that the highest perceived visual quality images of a given race come from a distribution where that race is well-represented, and that annotators consistently prefer generated images of white people over those of Black people.
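The truncation finding above hinges on how the truncation trick works in StyleGAN-family models: sampled latent codes are pulled toward the average latent, trading sample diversity for perceived quality, which is why it can amplify whichever races dominate the training distribution. Below is a minimal NumPy sketch of this standard operation, for illustration only; the `truncate` helper and the 512-dimensional latents are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a GAN mapping network's output: in StyleGAN-style
# models, style latents w are produced from z ~ N(0, I).
w = rng.standard_normal((1000, 512))   # batch of latent codes
w_avg = w.mean(axis=0)                 # running-average ("mean") latent

def truncate(w, w_avg, psi=0.7):
    """Pull latents toward the mean latent.

    psi = 1.0 leaves samples unchanged; psi < 1 shrinks them toward
    w_avg, improving perceived quality but reducing diversity.
    """
    return w_avg + psi * (w - w_avg)

w_trunc = truncate(w, w_avg, psi=0.5)

# Truncated latents sit closer to the mean latent than the originals,
# so rare modes (e.g. under-represented groups) become even rarer.
assert np.linalg.norm(w_trunc - w_avg) < np.linalg.norm(w - w_avg)
```

Because truncation concentrates samples around the mean of the learned latent distribution, an imbalanced training set shifts that mean toward the majority group, which is consistent with the paper's observation that truncation exacerbates racial imbalance.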

V. H. Maluleke and N. Thakkar—Equal contribution in alphabetical order.


Notes

  1. We do not objectively evaluate the underlying actual race, but rather measure the perceived race of the image. Race is a complex social construct, and visual features alone are not sufficient to evaluate it. See Sect. 3.1.


Acknowledgement

We thank Hany Farid, Judy Hoffman, Aaron Hertzmann, Bryan Russell, and Deborah Raji for useful discussions and feedback. This work was supported by the BAIR/BDD sponsors, ONR MURI N00014-21-1-2801, and NSF Graduate Fellowships. The study of annotator bias was performed under IRB Protocol ID 2022-04-15236.

Author information

Correspondence to Vongani H. Maluleke.


Electronic Supplementary Material

Supplementary material 1 (PDF, 6724 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Maluleke, V.H. et al. (2022). Studying Bias in GANs Through the Lens of Race. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13673. Springer, Cham. https://doi.org/10.1007/978-3-031-19778-9_20

  • DOI: https://doi.org/10.1007/978-3-031-19778-9_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19777-2

  • Online ISBN: 978-3-031-19778-9

  • eBook Packages: Computer Science (R0)
