Abstract
Artificial Intelligence (AI) is a technology widely used to support human decision-making. Current areas of application include financial services, engineering, and management. A number of attempts have been made to introduce AI decision support systems into areas that more obviously involve moral judgement. These include systems that give advice on patient care, on social benefit entitlement, and even ethical advice for medical professionals. Responding to these developments raises a complex set of moral questions. This paper proposes replacing them with a single clearer question: under what circumstances, if any, would people accept a moral judgement made by some sort of machine? Since, it is argued, the answer is that under some circumstances they would, urgent practical moral problems arise.
Notes
It is obviously the case that real humans rarely make free and rational decisions about which moral judgements to accept. They exist in networks of authority, social expectations, and religion which effectively limit their choices. However, to discuss the question in such realistic terms from the outset would serve only to obscure the argument.
The fact that computers currently make these sorts of decisions should not, under any circumstances, be conflated with the claim that this is in any sense a desirable state of affairs. There are some serious problems with this sort of development, which, unfortunately, lie outside the scope of this paper.
Actually, the apparent lack of prejudice and bias in computers is a consequence of the ‘logical myth’ mentioned in the last section. In practice, computers embody and express the prejudices of their designers, and this can sometimes be a serious problem.
Cite this article
Whitby, B. Computing machinery and morality. AI & Soc 22, 551–563 (2008). https://doi.org/10.1007/s00146-007-0100-y