ABSTRACT
Software introduces new kinds of agents: artificial software agents (ASA), including, for example, driverless trains and cars. To create these devices responsibly, engineers need an ethics of software agency. However, this pragmatic professional need for guidance and regulation conflicts with the weakness of moral science: we know little about how ethics informs interactions with artificial agents. Most importantly, we do not know how people will regard ASA as agents: as their agents (strictly speaking) and also as their competitive and cooperative partners. Naturally, we want to address these new problems with our old ethical tools, but this conservative strategy may not work, and if it fails, we may catastrophically misjudge the emerging moral landscape. (Just ask the creators of genetically modified foods.)
1. This lecture will look at the box, or frame, of traditional ethics and some ways to use experimental data to get outside it. The lecture uses some quick and nasty clicker experiments to point us to disturbing evidence from recent cognitive moral psychology about the form and content of our ethical apparatus (Haidt 2012) and its universality (Mikhail 2007). Then we turn to some new evidence on the ethics of human-ASA interaction. We focus on three surprising features of human-ASA interaction that disturb received ethical paradigms: 1) Overactive deontology: the tendency to seek out a culprit to blame, even if it is the victim. 2) Utopian consequentialism: denying the constraints of acting in the imperfect real world by shifting to wishful perfectionism. 3) Embracing mechanical exploitation: accepting worse behavior from a program than one would accept from a person in Ultimatum Game experiments.
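The third feature rests on the structure of the Ultimatum Game: a proposer offers a split of a fixed pot, and the responder either accepts (both keep the proposed shares) or rejects (both get nothing). The sketch below is an illustration of that payoff logic, not the lecture's experiment; the pot size and the acceptance thresholds for human versus machine proposers are assumed values chosen only to show how tolerating lower offers from a program changes outcomes.

```python
POT = 10  # assumed pot size for illustration

def ultimatum(offer: int, accept_threshold: int) -> tuple[int, int]:
    """Return (proposer payoff, responder payoff) for one round.

    The responder accepts any offer at or above their threshold;
    rejection leaves both players with nothing.
    """
    if offer >= accept_threshold:
        return POT - offer, offer
    return 0, 0

# Assumed thresholds: people often reject low offers from a human
# proposer, yet (per the lecture's claim) tolerate lower offers
# from a program.
human_threshold, machine_threshold = 3, 1

print(ultimatum(2, human_threshold))    # (0, 0): low human offer rejected
print(ultimatum(2, machine_threshold))  # (8, 2): same offer from a program accepted
```

On these assumed thresholds, a program making the same low offer walks away with more than a human proposer would: the responder's leniency toward the machine is what "embracing mechanical exploitation" names.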
2. Next, we show how an experimental, cognitive, and game-theoretic approach to ethics can situate and explain these problems. We play some games based on policy decisions for the emerging technology of driverless cars that remind us of the strategic dimension of ethics. We also examine weak experimental evidence that engineers think about ethics and technology differently from other moral tribes or types.
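To make the strategic dimension concrete, consider a hypothetical 2x2 "merge" game between a human driver and a driverless car, each choosing to yield or insist. This is not one of the lecture's games; the payoffs below are invented for illustration and give the structure of Chicken, where mutual insistence (a collision) is worst for both.

```python
import itertools

# (human move, ASA move) -> (human payoff, ASA payoff); assumed values
PAYOFFS = {
    ("yield", "yield"): (2, 2),
    ("yield", "insist"): (1, 3),
    ("insist", "yield"): (3, 1),
    ("insist", "insist"): (0, 0),  # collision: worst outcome for both
}

MOVES = ("yield", "insist")

def is_nash(profile):
    """True if neither player can gain by unilaterally switching moves."""
    h, a = profile
    if any(PAYOFFS[(h2, a)][0] > PAYOFFS[(h, a)][0] for h2 in MOVES):
        return False
    if any(PAYOFFS[(h, a2)][1] > PAYOFFS[(h, a)][1] for a2 in MOVES):
        return False
    return True

equilibria = [p for p in itertools.product(MOVES, repeat=2) if is_nash(p)]
print(equilibria)  # the two asymmetric profiles, as in Chicken
```

The two pure equilibria are the asymmetric ones: one party yields, the other insists. Which equilibrium a driverless car is programmed to play is precisely the kind of policy decision with an irreducibly strategic, not merely moral, dimension.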
3. However, we argue that theory cannot solve our ethical problems. Neither ethical theory nor game theory has resources powerful enough to discover, and hopefully to bridge, our moralized divisions. For these formidable scientific and political tasks (respectively) we need new empirical methods. We offer two examples from our current research program: 1) Anonymous input of moral and value data: clickers for face-to-face interaction. 2) Democratic-scale deliberation: N-Reasons, a web-based experimental prototype. Both of these methods challenge our research ethics, which experimental ethics shares with experimental software engineering.
As some of the data discussed in the lecture comes from the Robot Ethics survey, you will be better informed and represented if you visit http://your-views.org/D7/Robot_Ethics_Welcome. The "class" for the conference is "CompArch".
- P. Danielson. Designing a machine to learn about the ethics of robotics: the N-Reasons platform. Ethics and Information Technology, 12(3):251--261, 2010.
- P. Danielson. Engaging the public in the ethics of robots for war and peace. Philosophy & Technology, 24:239--249, 2011.
- P. A. Danielson. N-Reasons: computer mediated ethical decision support for public participation. In E. Einsiedel and K. O'Doherty, editors, Publics & Emerging Technologies: Cultures, Contexts, and Challenges, chapter 14. UBC Press, Vancouver, 2013.
- J. Haidt. The Righteous Mind: Why Good People Are Divided by Politics and Religion. Pantheon Books, 2012.
- G. Marcus. Moral machines. The New Yorker, Nov 27, 2012. http://www.newyorker.com/online/blogs/newsdesk/2012/11/google-driverless-car-morality.html
- J. Mikhail. Universal moral grammar: theory, evidence and the future. Trends in Cognitive Sciences, (4):143--152, 2007.
- A. Moon, P. A. Danielson, and H. F. M. Van der Loos. Survey-based discussions on morally contentious applications of interactive robotics. International Journal of Social Robotics, pages 1--20, 2012.
- D. A. Norman. The Design of Future Things. Basic Books, New York, 2007.
- J. Singer and N. G. Vinson. Ethical issues in empirical studies of software engineering. IEEE Transactions on Software Engineering, 28(12):1171--1180, 2002.
- E. Thulin and P. Danielson. Quantifying qualitative responses to the trolley problem and side-effect effect with the N-Reasons platform. Submitted, 2013.
Index Terms
- Ethics outside the box: empirical tools for an ethics of artificial agents