Abstract
This paper examines the intersection of actor-network theory (ANT) and algorithms in the context of artificial intelligence (AI). ANT is a sociological approach that highlights the role of both human and non-human actors in social networks, while algorithms are sets of instructions used to perform specific tasks. The paper argues that bringing these two fields together, through the ANT notions of program and translation, can reveal new power dynamics in the age of AI, and it underscores the importance of understanding these relationships when designing ethical and just AI systems. After providing an overview of ANT and algorithms, the paper analyzes their intersection in the context of AI, with particular attention to the ChatGPT platform, and concludes by proposing a framework for understanding and addressing the power dynamics in AI systems.
Notes
This comment was made by Dr. Sofia Trejo Abad during a PhD colloquium of the author.
References
Baidoo-Anu, D., Owusu Ansah, L.: Education in the era of generative artificial intelligence (AI): understanding the potential benefits of ChatGPT in promoting teaching and learning. Available at SSRN 4337484 (2023)
Bender, E.M., Gebru, T., McMillan-Major, A., Mitchell, M.: On the dangers of stochastic parrots: can language models be too big? [Paper presentation]. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, Canada (2021). https://doi.org/10.1145/3442188.3445922
Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Agarwal, S., et al.: Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 33 (2020)
Budzianowski, P., Vulić, I.: Hello, it’s GPT-2—how can I help you? Towards the use of pretrained language models for task-oriented dialogue systems. In: Proceedings of the 3rd Workshop on Neural Generation and Translation, pp. 15–22. Association for Computational Linguistics (2019)
OpenAI: About OpenAI (2023). https://openai.com/about
Chowdhary, K.: Natural language processing. In: Fundamentals of Artificial Intelligence, pp. 603–649. Springer (2020)
Chowdhury, N.A., Rahman, S.: A brief review of ChatGPT: Limitations, challenges and ethical-social implications. ResearchGate (2023). https://www.researchgate.net/publication/368397881_A_brief_review_of_ChatGPT_Limitations_Challenges_and_Ethical-Social_Implications
Dale, R.: GPT-3: what’s it good for? Nat. Lang. Eng. 27(1), 113–118 (2021). https://doi.org/10.1017/S1351324920000601
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: pre-training of deep bidirectional transformers for language understanding (2018). arXiv:1810.04805
Erhan, D., Bengio, Y., Courville, A., Manzagol, P., Vincent, P.: Why does unsupervised pre-training help deep learning. J. Mach. Learn. Res. 11, 625–660 (2010)
European Commission: Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (2021). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
European Parliament: AI Act: a step closer to the first rules on artificial intelligence (2023). https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence
Floridi, L., Chiriatti, M.: GPT-3: its nature, scope, limits, and consequences. Mind. Mach. 30(4), 681–694 (2020). https://doi.org/10.1007/s11023-020-09548-1
Fritz, A., Brandt, W., Gimpel, H., Bayer, S.: Moral agency without responsibility? Analysis of three ethical models of human–computer interaction in times of artificial intelligence (AI). In: De Ethica, vol. 6, Issue 1, pp. 3–22. Linköping University Electronic Press (2020). https://doi.org/10.3384/de-ethica.2001-8819.20613
Gozalo-Brizuela, R., Garrido-Merchán, E.C.: ChatGPT is not all you need. A state of the art review of large generative AI models (2023). https://arxiv.org/abs/2301.04655
Hosseini, M., Rasmussen, L.M., Resnik, D.B.: Using AI to write scholarly publications. Acc. Res. (2023). https://doi.org/10.1080/08989621.2023.2168535
Hu, L.: Generative AI and future (2023). https://pub.towardsai.net/generative-ai-and-future-c3b1695876f2
Hutchinson, B., Smart, A., Hanna, A., Denton, E., Greer, C., Kjartansson, O., Barnes, P., Mitchell, M.: Towards accountability for machine learning datasets: practices from software engineering and infrastructure. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, Canada (2021). https://doi.org/10.1145/3442188.3445918
Iskanderov, Y., Pautov, M.: Agents and multi-agent systems as actor-networks. In: Proceedings of the 12th International Conference on Agents and Artificial Intelligence. 12th International Conference on Agents and Artificial Intelligence. SCITEPRESS—Science and Technology Publications (2020). https://doi.org/10.5220/0008935601790184
Johnson, M., Schuster, M., Le, Q.V., Krikun, M., Wu, Y., Chen, Z., Thorat, N., Viégas, F., Wattenberg, M., Corrado, G., Hughes, M., Dean, J.: Google’s multilingual neural machine translation system: enabling zero-shot translation. Trans. Assoc. Comput. Linguist. 5, 339–351 (2017). https://doi.org/10.1162/tacl_a_00065
Jovanović, M.: Generative artificial intelligence: trends and prospects (2023). https://www.computer.org/csdl/magazine/co/2022/10/09903869/1H0G6xvtREk
Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., Stadler, M., Kasneci, G., et al.: ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 103, 102274 (2023). https://doi.org/10.1016/j.lindif.2023.102274
Latour, B.: Technology is society made durable. Sociol. Rev. 38(1_suppl), 103–131 (1990). https://doi.org/10.1111/j.1467-954x.1990.tb03350.x
Latour, B.: On actor-network theory. A few clarifications, plus more than a few complications. In: Philosophical Literary Journal Logos, vol. 27, Issue 1, pp. 173–197. The Russian Presidential Academy of National Economy and Public Administration (2017). https://doi.org/10.22394/0869-5377-2017-1-173-197
Latour, B., Aúz, T.F.: La esperanza de Pandora: ensayos sobre la realidad de los estudios de la ciencia. Gedisa (2001)
Latour, B.: The Pasteurization of France (trans. Sheridan, A., Law, J.). Harvard University Press (1988)
Liu, Y., Han, T., Ma, S., Zhang, J., Yang, Y., Tian, J., He, H., Li, A., He, M., Liu, Z., Wu, Z., Zhu, D., Li, X., Qiang, N., Shen, D., Liu, T., Ge, B.: Summary of ChatGPT/GPT-4 research and perspective towards the future of large language models (2023). arXiv:2304.01852
Lund, B.D., Wang, T.: Chatting about ChatGPT: how may AI and GPT impact academia and libraries? Library Hi Tech News (2023). https://doi.org/10.1108/LHTN-01-2023-0009
Lund, B.D., Wang, T., Mannuru, N.R., Nie, B., Shimray, S., Wang, Z.: ChatGPT and a new academic reality: AI-written research papers and the ethics of the large language models in scholarly publishing. JASIS&T (2023). https://doi.org/10.1002/asi.24750
Lund, K., Ostermann, S., Fischer, F., Pinkwart, N., Scheuer, O.: ChatGPT for language learning: a pedagogical framework and empirical evaluation. Comput. Educ. 173, 104212 (2023)
Lutz, C., Tamó-Larrieux, A.: The robot privacy paradox: understanding how privacy concerns shape intentions to use social robots. In: Human-Machine Communication, vol. 1, pp. 87–111. Nicholson School of Communication, UCF (2020). https://doi.org/10.30658/hmc.1.6
Morton Gutiérrez, J.L.: Replika y la compañía de la inteligencia artificial emocional: Los retos éticos y sociales de los chatbots de compañía [Replika and the companionship of emotional artificial intelligence: the ethical and social challenges of companion chatbots]. VISUAL REVIEW: International Visual Culture Review/Revista Internacional de Cultura Visual, 9(0) (2022). https://dialnet.unirioja.es/servlet/articulo?codigo=8664773
OpenAI.: GPT-4 Technical Report (2023). arXiv:2303.08774
O’Neil, C.: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, 1st edn., vol. 1. Crown Publishing Group (2016)
Royal Society: Machine Learning. Royal Society (2017)
Saeed Al-Mushayt, O.: Automating E-government services with artificial intelligence. IEEE Access 7, 21–29 (2019). https://doi.org/10.1109/access.2019.2946204
Scott, K.: Microsoft teams up with OpenAI to exclusively license GPT-3 language model. Official Microsoft Blog (2020)
Stokel-Walker, C., Van Noorden, R.: What ChatGPT and generative AI mean for science. Nature 614(7947), 214–216 (2023)
Van Dijck, J., Poell, T., De Waal, M.: The platform society. Public values in a connective world. Oxford University Press (2018)
Véliz, C.: The future of privacy. In: Future Morality, pp. 121–129. Oxford University Press (2021). https://doi.org/10.1093/oso/9780198862086.003.0012
Véliz, C.: Moral zombies: why algorithms are not moral agents. AI Soc. (2021). https://doi.org/10.1007/s00146-021-01189-x
Warren, T.: You can play with Microsoft’s Bing GPT-4 chatbot right now, no waitlist required. The Verge (2023). https://www.theverge.com/2023/3/15/23641683/microsoft-bing-ai-gpt-4-chatbot-available-no-waitlist
News and Web References
Brinkmann, M.: Google Bard gives wrong answer during Google's presentation (2023). https://www.ghacks.net/2023/02/09/google-bard-gives-wrong-answer-during-googles-presentation/
Issa, S.H.: Uber and Lyft drivers are using the companies’ algorithms against them. NBC News (2019). https://www.nbcnews.com/think/opinion/uber-lyft-drivers-are-using-companies-algorithms-against-them-ncna1009026
Goodin, D.: Hackers are selling a service that bypasses ChatGPT restrictions on Telegram. Ars Technica (2023). https://arstechnica.com/information-technology/2023/02/now-open-fee-based-telegram-service-that-uses-chatgpt-to-generate-malware/
Maguey, H.: ¿Cuál es el límite en la inteligencia artificial? Gaceta UNAM (2023). https://www.gaceta.unam.mx/cual-es-el-limite-en-la-inteligencia-artificial/
Mehdi, Y.: Confirmed: the new Bing runs on OpenAI’s GPT-4 [Blog post]. Bing Search Blog (2023). https://blogs.microsoft.com/blog/2023/05/04/announcing-the-next-wave-of-ai-innovation-with-microsoft-bing-and-edge/
Krieger, D. J., Belliger, A.: Interpreting Networks. transcript Verlag (2014)
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflicts of interest related to the work presented in this paper. They have no financial or non-financial interests that could have influenced the design, execution, analysis, or interpretation of the study, and no personal, political, religious, or academic affiliations that could have affected their objectivity or integrity. The authors are not involved in any legal action related to the paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Gutiérrez, J.L.M. On actor-network theory and algorithms: ChatGPT and the new power relationships in the age of AI. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00314-4
Received:
Accepted:
Published:
DOI: https://doi.org/10.1007/s43681-023-00314-4