Educating Software and AI Stakeholders About Algorithmic Fairness, Accountability, Transparency and Ethics

  • Article
  • Published in International Journal of Artificial Intelligence in Education

Abstract

This paper discusses educating stakeholders of algorithmic systems (systems that apply Artificial Intelligence/Machine Learning algorithms) in the areas of algorithmic fairness, accountability, transparency and ethics (FATE). We begin by establishing the need for such education and identifying the intended consumers of educational materials on the topic. We discuss the topics of greatest concern and in need of educational resources; we also survey the existing materials and past experiences in such education, noting the scarcity of suitable material on aspects of fairness in particular. We use an example of a college admission platform to illustrate our ideas. We conclude with recommendations for further work in the area and report on the first steps taken towards achieving this goal in the framework of an academic graduate seminar course, a graduate summer school, an embedded lecture in a software engineering course, and a workshop for high school teachers.

Notes

  1. https://www.mycoted.com/Snowball_Technique

  2. https://www.mycoted.com/Sticking_Dots

  3. https://www.britannica.com/topic/utilitarianism-philosophy

  4. https://unbias.wp.horizon.ac.uk/fairness-toolkit/

  5. https://dataresponsibly.github.io/courses/spring19/

  6. https://geomblog.github.io/fairness/

  7. http://www.cycat.io/

Acknowledgements

This research has been partly supported by the CyCAT, which has received funding from the European Union’s Horizon 2020 Research and Innovation Program under Grant Agreement No. 810105. The Barcelona workshop was supported by funding from the SeeRRI project under Grant Agreement No. 824588.

Author information

Corresponding author

Correspondence to Veronika Bogina.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

All authors contributed equally to this work.

About this article

Cite this article

Bogina, V., Hartman, A., Kuflik, T. et al. Educating Software and AI Stakeholders About Algorithmic Fairness, Accountability, Transparency and Ethics. Int J Artif Intell Educ 32, 808–833 (2022). https://doi.org/10.1007/s40593-021-00248-0
