DOI: 10.1145/3531146.3533150
Research Article | Public Access

Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning

Authors: A. Feder Cooper, Emanuel Moss, Benjamin Laufer, and Helen Nissenbaum
Published: 20 June 2022

ABSTRACT

In 1996, Accountability in a Computerized Society [95] issued a clarion call concerning the erosion of accountability in society due to the ubiquitous delegation of consequential functions to computerized systems. Nissenbaum [95] described four barriers to accountability that computerization presented, which we revisit in relation to the ascendance of data-driven algorithmic systems—i.e., machine learning or artificial intelligence—to uncover new challenges for accountability that these systems present. Nissenbaum’s original paper grounded discussion of the barriers in moral philosophy; we bring this analysis together with recent scholarship on relational accountability frameworks and discuss how the barriers present difficulties for instantiating a unified moral, relational framework in practice for data-driven algorithmic systems. We conclude by discussing ways of weakening the barriers in order to do so.

References

1. Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. https://www.tensorflow.org/ Software available from tensorflow.org.
2. Kenneth S. Abraham and Robert L. Rabin. 2019. Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era. Virginia Law Review 105 (2019), 127–171.
3. Philip Adler, Casey Falk, Sorelle A. Friedler, Tionney Nix, Gabriel Rybeck, Carlos Scheidegger, Brandon Smith, and Suresh Venkatasubramanian. 2018. Auditing black-box models for indirect influence. Knowledge and Information Systems 54 (2018), 95–122.
4. Shruti Agarwal, Hany Farid, Yuming Gu, Mingming He, Koki Nagano, and Hao Li. 2019. Protecting World Leaders Against Deep Fakes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. IEEE, Long Beach, CA, 8 pages.
5. Ifeoma Ajunwa. 2021. An Auditing Imperative for Automated Hiring. Harv. J.L. & Tech. 34, 1 (2021), 81 pages.
6. American Association for Justice. 2017. Driven to Safety: Robot Cars and the Future of Liability. 50 pages.
7. Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine bias. ProPublica 23, 2016 (May 2016), 139–159.
8. Jack M. Balkin. 2015. The Path of Robotics Law. California Law Review Circuit 6 (2015), 45–60.
9. Jack M. Balkin. 2018. Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation. UC Davis Law Review 51, 3 (2018), 1149–1210.
10. Chelsea Barabas, Colin Doyle, JB Rubinovitz, and Karthik Dinakar. 2020. Studying up: reorienting the study of algorithmic fairness around issues of power. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. ACM, New York, NY, USA, 167–176.
11. Catherine Barrett. 2019. Are the EU GDPR and the California CCPA becoming the de facto global standards for data privacy and protection? Scitech Lawyer 15, 3 (2019), 24–29.
12. David Barstow. 1988. Artificial Intelligence and Software Engineering. In Exploring Artificial Intelligence. Elsevier, 641–670.
13. Elena Beretta, Antonio Vetrò, Bruno Lepri, and Juan Carlos De Martin. 2021. Detecting discriminatory risk through data annotation based on Bayesian inferences. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. ACM, New York, NY, USA, 794–804.
14. Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Madhulika Srikumar, Adrian Weller, and Alice Xiang. 2021. Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, New York, NY, USA, 401–413.
15. Abeba Birhane and Jelle van Dijk. 2020. Robot rights? Let’s talk about human welfare instead. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, New York, NY, USA, 207–213.
16. Harvard Law Review Editorial Board. 2021. Google LLC v. Oracle America, Inc. https://harvardlawreview.org/2021/11/google-llc-v-oracle-america-inc/
17. Alexei Botchkarev. 2019. A New Typology Design of Performance Metrics to Measure Errors in Machine Learning Regression Algorithms. Interdisciplinary Journal of Information, Knowledge, and Management 14 (2019), 045–076.
18. Neil E. Boudette. 2021. Tesla Says Autopilot Makes Its Cars Safer. Crash Victims Say It Kills. https://www.nytimes.com/2021/07/05/business/tesla-autopilot-lawsuits-safety.html
19. Xavier Bouthillier, César Laurent, and Pascal Vincent. 2019. Unreproducible Research is Reproducible. In Proceedings of the 36th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 97), Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.). PMLR, 725–734.
20. Mark Bovens. 2007. Analysing and assessing accountability: A conceptual framework. European Law Journal 13, 4 (2007), 447–468.
21. Mark Bovens, Thomas Schillemans, and Robert E. Goodin. 2014. Public Accountability. The Oxford Handbook of Public Accountability 1, 1 (2014), 1–22.
22. Karen L. Boyd. 2021. Datasheets for Datasets help ML Engineers Notice and Understand Ethical Issues in Training Data. Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (2021), 1–27.
23. Joanna J. Bryson, Mihailis E. Diamantis, and Thomas D. Grant. 2017. Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law 25, 3 (2017), 273–291.
24. Ryan Calo. 2015. Robotics and the Lessons of Cyberlaw. California Law Review 103, 3 (2015), 513–563.
25. Ryan Calo. 2021. Modeling Through. Duke Law Journal 72 (2021), 28 pages. SSRN preprint: https://ssrn.com/abstract=3939211
26. Michael Carbin. 2019. Overparameterization: A connection between software 1.0 and software 2.0. In 3rd Summit on Advances in Programming Languages (SNAPL 2019). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 13 pages.
27. Danielle K. Citron and Ryan Calo. 2020. The Automated Administrative State: A Crisis of Legitimacy. Working paper, 51 pages. https://scholarship.law.bu.edu/faculty_scholarship/838
28. Danielle Keats Citron and Daniel J. Solove. 2022. Privacy Harms. Boston University Law Review 102 (2022), 62 pages.
29. Ignacio Cofone and Katherine J. Strandburg. 2019. Strategic Games and Algorithmic Secrecy. McGill Law Journal 623 (2019), 41 pages.
30. A. Feder Cooper and Ellen Abrams. 2021. Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, New York, NY, USA, 46–54.
31. A. Feder Cooper and Karen Levy. 2022. Fast or Accurate? Governing Conflicting Goals in Highly Autonomous Vehicles. Colorado Technology Law Journal 20 (2022).
32. A. Feder Cooper, Karen Levy, and Christopher De Sa. 2021. Accuracy-Efficiency Trade-Offs and Accountability in Distributed ML Systems. In Equity and Access in Algorithms, Mechanisms, and Optimization. Association for Computing Machinery, New York, NY, USA, Article 4, 11 pages.
33. A. Feder Cooper, Yucheng Lu, Jessica Zosa Forde, and Christopher De Sa. 2021. Hyperparameter Optimization Is Deceiving Us, and How to Stop It. In Advances in Neural Information Processing Systems, Vol. 34. Curran Associates, Inc., Red Hook, NY, USA, 43 pages.
34. Kate Crawford and Jason Schultz. 2014. Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms. B.C. Law Review 55 (2014), 93–128.
35. Fernando Delgado, Solon Barocas, and Karen Levy. 2022. An Uncommon Task: Participatory Design in Legal AI. Proceedings of the ACM on Human-Computer Interaction 6, CSCW1, Article 51 (April 2022), 23 pages. https://doi.org/10.1145/3512898
36. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, 4171–4186.
37. Nicholas Diakopoulos. 2020. Accountability, Transparency, and Algorithms. The Oxford Handbook of Ethics of AI 17, 4 (2020), 197.
38. Finale Doshi-Velez and Been Kim. 2018. Considerations for Evaluation and Generalization in Interpretable Machine Learning. In Explainable and Interpretable Models in Computer Vision and Machine Learning, Hugo Jair Escalante, Sergio Escalera, Isabelle Guyon, Xavier Baró, Yağmur Güçlütürk, Umut Güçlü, and Marcel van Gerven (Eds.). Springer International Publishing, 3–17.
39. Madeleine Clare Elish. 2019. Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction. Engaging Science, Technology, and Society (2019), 29 pages.
40. European Parliament Committee on Legal Affairs. 2017. Report with Recommendations to the Commission on Civil Law Rules on Robotics.
41. Federal Trade Commission. 2014. A Call For Transparency and Accountability: A Report of the Federal Trade Commission. 110 pages.
42. Joel Feinberg. 1968. Collective Responsibility. In Sixty-Fifth Annual Meeting of the American Philosophical Association, Eastern Division. The Journal of Philosophy 65, 21 (1968), 674–688.
43. Joel Feinberg. 1970. Doing & Deserving: Essays in the Theory of Responsibility. Princeton University Press, Princeton, NJ, USA.
44. Joel Feinberg. 1985. Sua Culpa. In Ethical Issues in the Use of Computers. Princeton University Press, Princeton, NJ, USA, 102–120.
45. Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. 2015. Certifying and Removing Disparate Impact. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Sydney, NSW, Australia) (KDD ’15). Association for Computing Machinery, New York, NY, USA, 259–268.
46. Benjamin Fish, Jeremy Kun, and Ádám Dániel Lelkes. 2016. A Confidence-Based Approach for Balancing Fairness and Accuracy. Preprint.
47. Jessica Zosa Forde, A. Feder Cooper, Kweku Kwegyir-Aggrey, Chris De Sa, and Michael L. Littman. 2021. Model Selection’s Disparate Impact in Real-World Deep Learning Applications. https://arxiv.org/abs/2104.00606
48. Alex A. Freitas. 2014. Comprehensible Classification Models: A Position Paper. SIGKDD Explor. Newsl. 15, 1 (March 2014), 1–10.
49. Batya Friedman and Helen Nissenbaum. 1996. Bias in Computer Systems. ACM Trans. Inf. Syst. 14, 3 (July 1996), 330–347.
50. Jeanne C. Fromer. 2019. Machines as the new Oompa-Loompas: trade secrecy, the cloud, machine learning, and automation. NYU L. Rev. 94 (2019), 706.
51. Harold Garfinkel. 1984. Studies in Ethnomethodology. Polity Press, Cambridge, UK.
52. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2021. Datasheets for datasets. Commun. ACM 64, 12 (2021), 86–92.
53. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. In ICLR (Poster). 11 pages.
54. Ronan Hamon, Henrik Junklewitz, Gianclaudio Malgieri, Paul De Hert, Laurent Beslay, and Ignacio Sanchez. 2021. Impossible Explanations? Beyond explainable AI in the GDPR from a COVID-19 use case scenario. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. ACM, New York, NY, USA, 549–559.
55. Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of Opportunity in Supervised Learning. In Advances in Neural Information Processing Systems, D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (Eds.), Vol. 29. Curran Associates, Inc., Red Hook, NY, USA.
56. Trevor Hastie, Robert Tibshirani, and Jerome Friedman. 2009. The Elements of Statistical Learning: Data Mining, Inference and Prediction (2nd ed.). Springer, USA.
57. Deborah Hellman. 2021. Big Data and Compounding Injustice. 18 pages. Forthcoming; SSRN preprint.
58. Kenneth Einar Himma. 2009. Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology 11, 1 (2009), 19–29.
59. Wei Hu, Zhiyuan Li, and Dingli Yu. 2020. Understanding Generalization of Deep Neural Networks Trained with Noisy Labels. In ICLR. 13 pages.
60. Ben Hutchinson, Andrew Smart, Alex Hanna, Emily Denton, Christina Greer, Oddur Kjartansson, Parker Barnes, and Margaret Mitchell. 2021. Towards accountability for machine learning datasets: Practices from software engineering and infrastructure. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. ACM, New York, NY, USA, 560–575.
61. Matthias Jarke. 1998. Requirements Tracing. Commun. ACM 41, 12 (Dec. 1998), 32–36.
62. Muhammad Atif Javed and Uwe Zdun. 2014. A systematic literature review of traceability approaches between software architecture and source code. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering (EASE ’14). ACM Press, London, England, United Kingdom, 1–10.
63. Severin Kacianka and Alexander Pretschner. 2021. Designing Accountable Systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event, Canada) (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 424–437.
64. Nathan Kallus and Angela Zhou. 2018. Residual Unfairness in Fair Machine Learning from Prejudiced Data. arXiv:1806.02887
65. Margot E. Kaminski. 2019. Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability. S. Cal. L. Rev. 92 (2019), 1529. 89 pages.
66. Sunny Seon Kang. 2020. Algorithmic accountability in public administration: The GDPR paradox. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. ACM, New York, NY, USA, 32.
67. Been Kim and Finale Doshi-Velez. 2021. Machine Learning Techniques for Accountability. AI Magazine 42, 1 (April 2021), 47–52.
68. Alexandra Kleeman. 2016. Cooking with Chef Watson, I.B.M.’s Artificial-Intelligence App. The New Yorker (20 Nov. 2016).
69. Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. 2018. Human Decisions and Machine Predictions. The Quarterly Journal of Economics 133, 1 (2018), 237–293.
70. Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton Earnshaw, Imran Haque, Sara M. Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. 2021. WILDS: A Benchmark of in-the-Wild Distribution Shifts. In Proceedings of the 38th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 139), Marina Meila and Tong Zhang (Eds.). PMLR, 5637–5664.
71. Nitin Kohli, Renata Barreto, and Joshua A. Kroll. 2018. Translation tutorial: A shared lexicon for research and practice in human-centered software systems. In 1st Conference on Fairness, Accountability, and Transparency, Vol. 7. ACM, New York, NY, USA, 7. Tutorial.
72. Jeff Kosseff. 2022. A User’s Guide to Section 230, and a Legislator’s Guide to Amending It (or Not). Berkeley Technology Law Journal 37, 2 (2022), 40 pages.
73. Sarah Kreps, R. Miles McCain, and Miles Brundage. 2020. All the News That’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation. Journal of Experimental Political Science (2020), 1–14.
74. Joshua A. Kroll. 2021. Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event, Canada) (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 758–771.
75. Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson, and Harlan Yu. 2017. Accountable Algorithms. University of Pennsylvania Law Review 165 (2017), 633–705.
76. Sarah Lamdan. 2019. When Westlaw Fuels ICE Surveillance: Legal Ethics in the Era of Big Data Policing. N.Y.U. Review of Law and Social Change 43, 2 (2019), 255–293.
77. David Lehr and Paul Ohm. 2017. Playing with the Data: What Legal Scholars Should Learn About Machine Learning. U.C. Davis Law Review 51 (2017), 653–717.
78. Mark A. Lemley and Bryan Casey. 2019. Remedies for Robots. The University of Chicago Law Review 86, 5 (2019), 1311–1396.
79. Karen E.C. Levy. 2014. Intimate Surveillance. Idaho L. Rev. 51 (2014), 679.
80. Karen E.C. Levy and David Merritt Johns. 2016. When open data is a Trojan Horse: The weaponization of transparency in science and governance. Big Data & Society 3, 1 (2016), 6 pages.
81. Charles Lovering, Rohan Jha, Tal Linzen, and Ellie Pavlick. 2021. Predicting Inductive Biases of Pre-Trained Models. In International Conference on Learning Representations.
82. Donald MacKenzie. 2001. Mechanizing Proof: Computing, Risk, and Trust. MIT Press, Cambridge, MA, USA.
83. Peter Mattson, Vijay Janapa Reddi, Christine Cheng, Cody Coleman, Greg Diamos, David Kanter, Paulius Micikevicius, David Patterson, Guenther Schmuelling, Hanlin Tang, Gu-Yeon Wei, and Carole-Jean Wu. 2020. MLPerf: An Industry Standard Benchmark Suite for Machine Learning Performance. IEEE Micro 40, 2 (2020), 8–16. https://doi.org/10.1109/MM.2020.2974843
84. Angelina McMillan-Major, Salomey Osei, Juan Diego Rodriguez, Pawan Sasanka Ammanamanchi, Sebastian Gehrmann, and Yacine Jernite. 2021. Reusable Templates and Guides For Documenting Datasets and Models for Natural Language Processing and Generation: A Case Study of the HuggingFace and GEM Data and Model Cards. arXiv preprint.
85. Alexander Meinke and Matthias Hein. 2019. Towards neural networks that provably know when they don’t know. arXiv preprint.
86. Jacob Metcalf, Emanuel Moss, Ranjit Singh, Emnet Tafese, and Elizabeth Anne Watkins. 2022. A relationship and not a thing: A relational approach to algorithmic accountability and assessment documentation. https://doi.org/10.48550/ARXIV.2203.01455
87. Jacob Metcalf, Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, and Madeleine Clare Elish. 2021. Algorithmic impact assessments and accountability: The co-construction of impacts. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. ACM, New York, NY, USA, 735–746.
88. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model Cards for Model Reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, New York, NY, USA, 220–229.
89. Pegah Moradi and Karen Levy. 2020. The Future of Work in the Age of AI: Displacement or Risk-Shifting? Oxford Handbook of Ethics of AI (2020), 271–287.
90. Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, Madeleine Clare Elish, and Jacob Metcalf. 2021. Assembling Accountability: Algorithmic Impact Assessment for the Public Interest. SSRN preprint.
91. Deirdre K. Mulligan and Kenneth A. Bamberger. 2018. Saving governance-by-design. California Law Review 106 (June 2018), 697–784.
92. Deirdre K. Mulligan and Kenneth A. Bamberger. 2019. Procurement As Policy: Administrative Process for Machine Learning. Berkeley Technology Law Journal 34 (Oct. 2019), 771–858.
93. Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Online, 5356–5371.
94. Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. 2017. Exploring Generalization in Deep Learning. In Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30. Curran Associates, Inc., Red Hook, NY, USA.
95. Helen Nissenbaum. 1996. Accountability in a computerized society. Science and Engineering Ethics 2, 1 (1996), 25–42.
96. Ngozi Okidegbe. 2022. Discredited Data. Forthcoming, Cornell Law Review, Vol. 107. Shared privately with the authors.
97. Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, and Jasper Snoek. 2019. Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty under Dataset Shift. In Proceedings of the 33rd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, Article 1254, 12 pages.
98. Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 2016. The Limitations of Deep Learning in Adversarial Settings. In 1st IEEE European Symposium on Security & Privacy. IEEE, 16 pages.
99. Samir Passi and Solon Barocas. 2019. Problem Formulation and Fairness. In Proceedings of the Conference on Fairness, Accountability, and Transparency (Atlanta, GA, USA) (FAT* ’19). Association for Computing Machinery, New York, NY, USA, 39–48.
100. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.). Curran Associates, Inc., Red Hook, NY, USA, 8024–8035.
101. Alisha Pradhan, Leah Findlater, and Amanda Lazar. 2019. “Phantom Friend” or “Just a Box with Information”: Personification and Ontological Categorization of Smart Speaker-based Voice Assistants by Older Adults. In Proceedings of the ACM on Human-Computer Interaction, Vol. 3. ACM, New York, NY, USA, 1–21.
102. Edward Raff. 2019. A Step toward Quantifying Independently Reproducible Machine Learning Research. In Proceedings of the 33rd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, Article 492, 11 pages.
103. Inioluwa Deborah Raji, Andrew Smart, Rebecca N. White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, and Parker Barnes. 2020. Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. ACM, New York, NY, USA, 33–44.
104. Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2019. Do ImageNet Classifiers Generalize to ImageNet? In Proceedings of the 36th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 97), Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.). PMLR, 5389–5400.
105. Jathan Sadowski, Salomé Viljoen, and Meredith Whittaker. 2021. Everyone should decide how their digital data are used — not just tech companies.
106. Thomas Scanlon. 2000. What We Owe to Each Other. Belknap Press, Cambridge, MA, USA.
107. Markus Schlosser. 2019. Agency. In The Stanford Encyclopedia of Philosophy (Winter 2019 ed.), Edward N. Zalta (Ed.). Metaphysics Research Lab, Stanford University, USA.
108. Andrew D. Selbst. 2021. An Institutional View of Algorithmic Impact Assessments. Harvard Journal of Law & Technology 35 (2021), 117. 75 pages.
109. Andrew D. Selbst, danah boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and Abstraction in Sociotechnical Systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (Atlanta, GA, USA) (FAT* ’19). Association for Computing Machinery, New York, NY, USA, 59–68.
110. Hetan Shah. 2018. Algorithmic Accountability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, 2128 (2018), 6 pages.
111. Catherine M. Sharkey. 2016. Can Data Breach Claims Survive the Economic Loss Rule? DePaul Law Review 66 (2016), 339.
112. Hong Shen, Wesley H. Deng, Aditi Chattopadhyay, Zhiwei Steven Wu, Xu Wang, and Haiyi Zhu. 2021. Value Cards: An Educational Toolkit for Teaching Social Impacts of Machine Learning through Deliberation. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. ACM, New York, NY, USA, 850–861.
113. David Shoemaker. 2011. Attributability, answerability, and accountability: Toward a wider theory of moral responsibility. Ethics 121, 3 (2011), 602–632.
114. Prabhu Teja Sivaprasad, Florian Mai, Thijs Vogels, Martin Jaggi, and François Fleuret. 2020. Optimizer Benchmarking Needs to Account for Hyperparameter Tuning. In Proceedings of the 37th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 119), Hal Daumé III and Aarti Singh (Eds.). PMLR, 9036–9045.
115. Mona Sloane, Emanuel Moss, Olaitan Awomolo, and Laura Forlano. 2020. Participation is not a Design Fix for Machine Learning. arXiv preprint.
116. Mona Sloane, Emanuel Moss, and Rumman Chowdhury. 2021. A Silicon Valley Love Triangle: Hiring Algorithms, Pseudo-Science, and the Quest for Auditability. arXiv preprint.
117. Brian Cantwell Smith. 1985. The Limits of Correctness. SIGCAS Comput. Soc. 14–15, 1–4 (Jan. 1985), 18–26.
118. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research 15, 56 (2014), 1929–1958.
119. Luke Stark and Jevan Hutson. 2021. Physiognomic Artificial Intelligence. SSRN preprint.
120. Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating Gender Bias in Natural Language Processing: Literature Review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy, 1630–1640.
121. Harry Surden and Mary-Anne Williams. 2016. Technological Opacity, Predictability, and Self-Driving Cars. Cardozo Law Review 38 (2016), 121–181.
122. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In ICLR (Poster). 10 pages.
123. Matthew Talbert. 2019. Moral Responsibility. In The Stanford Encyclopedia of Philosophy (Winter 2019 ed.), Edward N. Zalta (Ed.). Metaphysics Research Lab, Stanford University, USA.
124. Piotr Tereszkiewicz. 2018. Digital platforms: regulation and liability in the EU law. European Review of Private Law 26, 6 (2018), 18 pages.
125. Sherry Turkle. 2005. The Second Self: Computers and the Human Spirit. MIT Press, Cambridge, MA, USA.
126. U.S. Government Accountability Office. 2021. Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities. 112 pages. https://www.gao.gov/assets/gao-21-519sp.pdf
127. Kees‐Jan van Dorp. 2002. Tracking and tracing: a structure for development and contemporary practices. Logistics Information Management 15, 1 (March 2002), 24–33.
128. Briana Vecchione, Karen Levy, and Solon Barocas. 2021. Algorithmic Auditing and Social Justice: Lessons from the History of Audit Studies. In Equity and Access in Algorithms, Mechanisms, and Optimization. ACM, New York, NY, USA, 1–9.
129. Carissa Véliz. 2021. Moral zombies: why algorithms are not moral agents. AI & Society 36 (2021), 11 pages.
130. Salomé Viljoen. 2021. A Relational Theory of Data Governance. Yale Law Journal 131, 2 (2021), 82 pages.
131. Sandra Wachter and Brent Mittelstadt. 2019. A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI. Columbia Business Law Review 2 (2019), 130 pages.
132. Sandra Wachter, Brent Mittelstadt, and Luciano Floridi. 2017. Transparent, explainable, and accountable AI for robotics. Science Robotics 2, 6 (2017).
133. Ari Ezra Waldman. 2019. Power, Process, and Automated Decision-Making. Fordham Law Review 88 (2019), 21 pages.
134. Yilun Wang and Michal Kosinski. 2018. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology 114, 2 (Feb. 2018), 246–257.
135. Gary Watson. 1996. Two Faces of Responsibility. Philosophical Topics 24, 2 (1996), 227–248.
136. Jan Whittington, Ryan Calo, Mike Simon, Jesse Woo, Meg Young, and Peter Schmiedeskamp. 2015. Push, pull, and spill: A transdisciplinary case study in municipal open government. Berkeley Technology Law Journal 30, 3 (2015), 1899–1966.
137. Maranke Wieringa. 2020. What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. ACM, New York, NY, USA, 1–18.
138. Xiaolin Wu and Xi Zhang. 2016. Automated Inference on Criminality using Face Images. arXiv preprint.
139. Yichen Yang and Martin Rinard. 2019. Correctness Verification of Neural Networks. In NeurIPS 2019 Workshop on Machine Learning with Guarantees.
140. Meg Young, Luke Rodriguez, Emily Keller, Feiyang Sun, Boyang Sa, Jan Whittington, and Bill Howe. 2019. Beyond open vs. closed: Balancing individual privacy and public accountability in data sharing. In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, New York, NY, USA, 191–200.
141. Du Zhang and Jeffrey J.P. Tsai. 2003. Machine learning and software engineering. Software Quality Journal 11, 2 (2003), 87–119.
142. Ruqi Zhang, A. Feder Cooper, and Christopher M. De Sa. 2020. Asymptotically Optimal Exact Minibatch Metropolis-Hastings. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, Inc., Red Hook, NY, USA, 19500–19510.

Published in

FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
June 2022, 2351 pages
ISBN: 9781450393522
DOI: 10.1145/3531146
Copyright © 2022 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States

