DOI: 10.1145/3491101.3503811
CHI Conference Proceedings · Extended Abstract

A Human-Centric Perspective on Fairness and Transparency in Algorithmic Decision-Making

Published: 28 April 2022

ABSTRACT

Automated decision systems (ADS) are increasingly used for consequential decision-making. These systems often rely on sophisticated yet opaque machine learning models, which make it difficult to understand how a given decision was reached. This is not only problematic from a legal perspective; non-transparent systems are also prone to yielding unfair outcomes because their soundness is hard to assess and calibrate in the first place, which is particularly worrisome for human decision-subjects. Based on this observation and building upon existing work, I aim to make the following three main contributions through my doctoral thesis: (a) understand how (potential) decision-subjects perceive algorithmic decisions (with varying degrees of transparency of the underlying ADS), as compared to similar decisions made by humans; (b) evaluate different tools for transparent decision-making with respect to how effectively they enable people to assess the quality and fairness of ADS; and (c) develop human-understandable technical artifacts for fair automated decision-making. During the first half of my PhD program, I have addressed substantial parts of (a) and (c); (b) will be the main focus of the second half.
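To make contribution (b) concrete: assessing the fairness of an ADS often starts from group-fairness metrics such as statistical parity and equal opportunity. The sketch below is illustrative only and is not taken from the thesis; the function names, the synthetic data, and the binary protected attribute are all assumptions introduced here for exposition.

```python
# Illustrative sketch (not from the abstract): two common group-fairness
# metrics that transparency tools might surface to decision-subjects.
# All data below is synthetic; "group" is a hypothetical binary
# protected attribute.

def statistical_parity_difference(y_pred, group):
    """Positive-prediction rate of group 1 minus that of group 0."""
    def rate(g):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        return sum(preds) / len(preds)
    return rate(1) - rate(0)

def equal_opportunity_difference(y_true, y_pred, group):
    """True-positive rate (recall) of group 1 minus that of group 0."""
    def tpr(g):
        # Predictions restricted to truly positive members of group g.
        pos = [p for p, t, a in zip(y_pred, y_true, group) if a == g and t == 1]
        return sum(pos) / len(pos)
    return tpr(1) - tpr(0)

# Synthetic decisions, ground truth, and protected-attribute labels.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
y_true = [1, 0, 1, 0, 1, 1, 0, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(statistical_parity_difference(y_pred, group))          # → -0.5
print(equal_opportunity_difference(y_true, y_pred, group))   # → -0.6666...
```

A value of 0 on either metric means parity between the two groups; the sign indicates which group is favored. Which notion (if any) is appropriate is itself contested, as the trade-off results cited in the full paper's literature show.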


Published in

CHI EA '22: Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems
April 2022, 3066 pages
ISBN: 9781450391566
DOI: 10.1145/3491101
Copyright © 2022 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher: Association for Computing Machinery, New York, NY, United States

Qualifiers: extended-abstract, research, refereed limited

Overall acceptance rate: 6,164 of 23,696 submissions, 26%
