
Preserving Utility in Fair Top-k Ranking with Intersectional Bias

  • Conference paper
Advances in Bias and Fairness in Information Retrieval (BIAS 2023)

Abstract

Ranking is required in many real-world applications, such as search, personalisation, recommendation, and filtering. Recent research has focused on developing reliable ranking algorithms that maintain fairness in their outcomes. However, only a few approaches consider multiple protected groups, since this extension introduces significant challenges. While convenient in research settings, handling fairness through a single binary sensitive feature is inadequate when an algorithm must be deployed responsibly in real-world applications.

Our work builds on Multinomial FA*IR, a fair top-k ranking algorithm with multiple protected groups, which we extend to give users the option to balance fairness and utility, adapting the final ranking accordingly. Our experimental results show that our approach can find better alternative solutions overlooked by Multinomial FA*IR without violating fairness boundaries. The code of the implemented solution and the experiments is publicly available to the community in a GitHub repository.


Notes

  1. In this work we shall use the terms “protected” and “sensitive” group interchangeably to refer to a set of personal attributes, typically “disadvantaged”, that must be considered when dealing with fairness and discrimination issues.

  2. E.g. for two protected groups, at position \(j=6\), \(\omega _p=[3,2]\) means that there are three individuals from \(\mathcal {G}^1\), two from \(\mathcal {G}^2\), and the remaining from the non-protected group \(\mathcal {G}^0\).

  3. Dataset available at https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data).
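The per-group counter \(\omega_p\) described in footnote 2 can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `omega_p` and the integer group-label encoding are assumptions made for the example.

```python
def omega_p(group_labels, j, num_protected_groups):
    """Count protected-group members among the first j ranked candidates.

    group_labels: list of ints, where 0 denotes the non-protected group
    G^0 and 1..m denote the protected groups G^1..G^m.
    Returns a list [c_1, ..., c_m] of counts, one per protected group;
    the remaining candidates in the prefix belong to G^0.
    """
    counts = [0] * num_protected_groups
    for g in group_labels[:j]:  # only the top-j prefix of the ranking
        if g > 0:
            counts[g - 1] += 1
    return counts


# Footnote example: at position j = 6, three candidates from G^1 and
# two from G^2 yield omega_p = [3, 2]; the remaining one is from G^0.
ranking = [1, 2, 1, 0, 2, 1]
print(omega_p(ranking, 6, 2))  # → [3, 2]
```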


Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Alessandro Castelnovo.

Editor information

Editors and Affiliations

Rights and permissions

Reprints and permissions

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Alimonda, N., Castelnovo, A., Crupi, R., Mercorio, F., Mezzanzanica, M. (2023). Preserving Utility in Fair Top-k Ranking with Intersectional Bias. In: Boratto, L., Faralli, S., Marras, M., Stilo, G. (eds) Advances in Bias and Fairness in Information Retrieval. BIAS 2023. Communications in Computer and Information Science, vol 1840. Springer, Cham. https://doi.org/10.1007/978-3-031-37249-0_5


  • DOI: https://doi.org/10.1007/978-3-031-37249-0_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-37248-3

  • Online ISBN: 978-3-031-37249-0

  • eBook Packages: Computer Science, Computer Science (R0)
