Abstract
It is a pervasive feature of today’s life that we rely more and more on technology when making decisions. For example, we often “blindly” follow the instructions of navigation systems when driving. Letting the navigation system “take control” is precisely one of the main reasons to use such a technology in the first place because we usually do not have the time to determine the best route ourselves, especially given the current traffic situation. Moreover, we may even have developed a tendency to see ourselves as less responsible or even to shun responsibility altogether because of this lack of control, like when we say that it was not really us who made the decision but the navigation system. In this paper, I address this claim about our diminished or even lacking moral responsibility when relying on technology in our decision-making. For, if it could be shown that by relying on technology we, indeed, lose a morally relevant form of control, but that we are or should be held responsible for our decisions and their consequences nonetheless, this moral practice would include more and more cases of moral luck, i.e. we would be held morally responsible for things beyond our control. I propose to dub such instances technological moral luck and argue that the stronger we understand the underlying control principle for moral responsibility, the more we will have to accept that moral responsibility becomes a matter of moral luck if we still want to hold agents morally responsible when they rely on technology.
Notes
- 1.
(Köhler 2018).
- 2.
For an overview of relevant psychological effects, see (Cummings 2006).
- 3.
Accordingly, I will not be concerned with cases of self-driving cars here, where it has been especially argued that a “responsibility gap” occurs, since neither the driver nor the car may be considered morally responsible. While the driver as well as the designers and programmers apparently lack the kind of control necessary for moral responsibility, the self-driving car lacks the prerequisites of being considered a morally responsible agent in the first place. Consequently, there appears to be no one left to whom moral responsibility could be attributed. The same holds for sufficiently autonomous weapons systems, prominently dubbed “killer robots,” in the context of which the issue is mostly discussed. See (Matthias 2004; Cummings 2006; Sparrow 2007; Hevelke and Nida-Rümelin 2015; Champagne and Tonkens 2015; Danaher 2016, 2019; Gunkel 2017; Santoni de Sio and van den Hoven 2018; Nyholm 2018; Jong 2019; Himmelreich 2019). For an overview of the debate, see (Noorman 2018).
- 4.
For an overview of the debate on moral luck, see (Nelkin 2019).
- 5.
To avoid a possible misunderstanding, the notion of technological moral luck is not meant to refer to an additional kind of moral luck, i.e. in addition to the four seminal kinds of moral luck introduced by Thomas Nagel, namely resultant, circumstantial, constitutive, and causal luck (cp. Nagel 1979, p. 28). The notion is merely meant to highlight technology’s influence on moral luck. For, technology may be the deciding factor 1) whether an agent achieves the intended result (resultant luck), e.g. warning someone in time by calling the person on the phone, 2) whether an agent ends up in a situation in the first place (circumstantial luck), e.g. when using a navigation system or following restaurant suggestions based on analyzed user data, 3) what kind of character an agent develops due to technology’s influence (constitutive luck), e.g. due to the use of social media in his or her upbringing, or taking a certain medication, or being subject to deep brain stimulation, both of which may substantially affect one’s personality, and 4) when it comes to a person’s autonomy in general (causal luck) in case the person has been born as a “designer baby,” at least if one follows the respective criticism. For a concise discussion of this last point, see (Beck forthcoming).
- 6.
For an overview of the notion of and debate on moral responsibility, see (Eshleman 2016).
- 7.
For an overview in this regard, see (Wilson and Shpall 2016, Sect. 1.1).
- 8.
For an overview of the debate on understanding action and attempts, see again (Wilson and Shpall 2016).
- 9.
See (Fischer and Ravizza 1998).
- 10.
Cp. (Fischer and Ravizza 1998, ch. 3).
- 11.
(Fischer and Ravizza 1998, p. 89).
- 12.
For the distinction between attributability and accountability, see prominently (Watson 1996). For the general distinction between causal responsibility, moral responsibility, and praise- and blameworthiness in the context of moral luck, see (Concepcion 2002). For the notion of agent regret, which highlights the agent’s causal and agential, albeit blameless, involvement in what happened and which is not the same as mere spectator regret, see (Baron 1988; Rorty 1980; Williams 1981, pp. 27–31; Kühler 2013, ch. 13).
- 13.
Another underlying issue should be noted at this point. The mentioned relation between being in fact responsible and being held responsible may be interpreted either as a conceptual or as a normative relation. While the former implies that holding someone responsible for something for which the person is in fact not responsible would be conceptually incoherent to begin with, the latter would allow such a practice as indeed conceptually possible but prima facie unfair. Hence, the latter view essentially adopts an ascriptivist notion of moral responsibility, following Peter F. Strawson’s seminal account. See (Strawson 1962) and also (Wallace 1994; McKenna and Russell 2008). I have argued elsewhere in favor of interpreting the control principle in this latter, ascriptivist fashion, i.e. as a normative principle of fairness, including its underlying principle “ought implies can” (cp. Kühler 2012, 2013, 2016).
- 14.
- 15.
Of course, such a comprehensive moral practice of holding persons accountable also comprises conditions of fairness, like (reasonably expectable) knowledge, foresight, care, avoidance of negligence, and the absence of manipulation or coercion, which may be put forward as justifications or excuses in order to limit an agent’s accountability.
- 16.
See, again, (Nelkin 2019, Sect. 1), who provides a general overview of the corresponding debate on moral luck.
- 17.
Cp. (Kindhäuser 2011, sect. IV.2.d; Morge 2015). See also (Santoni de Sio and van den Hoven 2018, pp. 8–11; Himmelreich 2019, p. 10) concerning similar tracing accounts of responsibility, according to which an agent may be held responsible for actions of autonomous weapon systems if the outcome in question can be traced back to a relevant decision or action of the agent.
- 18.
This is not to say that the strong version of the control principle is, therefore, wrong or untenable. However, I take it that we are rather hesitant to adopt it in its full force, given our current moral practice. Essentially, this matches the observation made by defenders of moral luck, namely that moral luck is a pervasive element of our moral practice, at least if one adopts the strong version of the control principle.
- 19.
The strong version of the control principle can, of course, be qualified analogously.
- 20.
No wonder, then, that much of the debate on “responsibility gaps” regarding autonomously acting technology, most notably weapons systems or cars, essentially comes down to referring, more or less implicitly, to different types of control as well as to different kinds of responsibility in order to close or highlight specific gaps. See, for instance, (Champagne and Tonkens 2015; Hevelke and Nida-Rümelin 2015; Danaher 2016; Nyholm 2018; Jong 2019; Santoni de Sio and van den Hoven 2018; Himmelreich 2019).
- 21.
At most, it might be argued that it is fair to hold the driver morally accountable, and even to consider him or her blameworthy, if the driver had shown attributable negligence with regard to the reliability of the car’s brakes beforehand. However, this still raises the question of what exactly the driver may be held morally accountable or considered blameworthy for: only the unreliable state of the brakes, or also bumping into the car in front? Assuming the strong version of the control principle, only the former appears to be warranted.
- 22.
Following such a line of thought, John Danaher has recently argued that relying more and more on technology may even put our moral agency as such in jeopardy. Cp. (Danaher 2019).
- 23.
In other words, the challenge to moral responsibility stems from a lack of computational transparency, i.e. how information is processed and results are reached. Moreover, even if the computation were transparent, it would need to be translated into reasons that can be critically reflected, yielding all the problems that such translations bring with them.
- 24.
However, even if there is some positive previous experience with the technology in question, consider more controversial cases, like a judge relying on a predictive algorithm when deciding about putting a child into foster care, instead of leaving him or her with the biological parents. If the judge cannot reflect on the algorithm’s suggestion in terms of good or bad reasons, the judge’s reliance arguably becomes a matter of “blindly” following or trusting the algorithm, i.e. trusting that it produces the intended result in that it is sufficiently functionally equivalent to a well-reasoned respective human decision. For a detailed discussion of this topic, see Thomas Grote’s contribution in this volume.
- 25.
Cp. (Fischer and Ravizza 1998, ch. 8).
- 26.
References
Ackeren, M. van, & Kühler, M. (Eds.). (2016). The limits of moral obligation. Moral demandingness and ought implies can. New York: Routledge.
Baron, M. (1988). Remorse and agent-regret. Midwest Studies in Philosophy, 13, 259–281.
Beck, B. (forthcoming). The ART of authenticity. In M. Kühler & V. Mitrović (Eds.), Theories of the self and autonomy in medical ethics. Dordrecht: Springer.
Champagne, M., & Tonkens, R. (2015). Bridging the responsibility gap in automated warfare. Philosophy and Technology, 28(1), 125–137. https://doi.org/10.1007/s13347-013-0138-3.
Concepcion, D. W. (2002). Moral luck, control, and the bases of desert. The Journal of Value Inquiry, 36, 455–461.
Cummings, M. L. (2006). Automation and accountability in decision support system interface design. Journal of Technology Studies, 32(1), 23–31.
Danaher, J. (2016). Robots, law and the retribution gap. Ethics and Information Technology, 18(4), 299–309. https://doi.org/10.1007/s10676-016-9403-3.
Danaher, J. (2019). The rise of the robots and the crisis of moral patiency. AI & SOCIETY, 34(1), 129–136. https://doi.org/10.1007/s00146-017-0773-9.
Eshleman, A. (2016). Moral responsibility. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2016). https://plato.stanford.edu/archives/win2016/entries/moral-responsibility/.
Fischer, J. M., & Ravizza, M. (1998). Responsibility and control: A theory of moral responsibility. Cambridge: Cambridge University Press.
Gunkel, D. J. (2017). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology. https://doi.org/10.1007/s10676-017-9428-2.
Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science and Engineering Ethics, 21(3), 619–630. https://doi.org/10.1007/s11948-014-9565-5.
Himmelreich, J. (2019). Responsibility for killer robots. Ethical Theory and Moral Practice, 1–17. https://doi.org/10.1007/s10677-019-10007-9.
Jong, R. de. (2019). The retribution-gap and responsibility-loci related to robots and automated technologies: A reply to Nyholm. Science and Engineering Ethics, 1–9. https://doi.org/10.1007/s11948-019-00120-4.
Kindhäuser, U. (2011). Handlung. In Enzyklopädie zur Rechtsphilosophie. http://www.enzyklopaedie-rechtsphilosophie.net/inhaltsverzeichnis/19-beitraege/106-handlung.
Köhler, B. (2018, February 27). ‘Nicht nach Navi fahren!’: Darum warnt die Polizei auf der A73 mit diesem Schild. Retrieved 10 August 2019, from InFranken.de website: https://www.infranken.de/regional/franken/warum-bei-ebersdorf-navifreie-zone-ist;art58454,3190022.
Kühler, M. (2012). ‘Resultant Moral Luck’, ‘Sollen impliziert Können’ und eine komplexe normative Analyse moralischer Verantwortlichkeit. Grazer Philosophische Studien, 86, 181–205.
Kühler, M. (2013). Sollen ohne Können? Über Sinn und Geltung nicht erfüllbarer Sollensansprüche. Münster: Mentis.
Kühler, M. (2016). Demanding the impossible: Conceptually misguided or merely unfair? In M. van Ackeren & M. Kühler (Eds.), The limits of moral obligation. Moral demandingness and ought implies can (pp. 116–130). New York: Routledge.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.
McKenna, M., & Russell, P. (Eds.). (2008). Free will and reactive attitudes. Perspectives on P. F. Strawson’s ‘Freedom and Resentment’. Farnham: Ashgate.
Morge, S. (2015). Die actio libera in causa im Rahmen des § 21 StGB: Eine rechtsdogmatische Untersuchung unter besonderer Berücksichtigung der Fälle selbstverschuldeter Trunkenheit im Übrigen. Hamburg: Verlag Dr. Kovač.
Nagel, T. (1979). Moral luck. In T. Nagel, Mortal questions (pp. 24–38). Cambridge: Cambridge University Press.
Nelkin, D. K. (2019). Moral luck. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2019). https://plato.stanford.edu/archives/sum2019/entries/moral-luck/.
Noorman, M. (2018). Computing and moral responsibility. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2018). https://plato.stanford.edu/archives/spr2018/entries/computing-responsibility/.
Nyholm, S. (2018). Attributing agency to automated systems: Reflections on human-robot collaborations and responsibility-loci. Science and Engineering Ethics, 24(4), 1201–1219. https://doi.org/10.1007/s11948-017-9943-x.
Rorty, A. O. (1980). Agent regret. In A. O. Rorty (Ed.), Explaining emotions (pp. 489–506). Berkeley: University of California Press.
Santoni de Sio, F., & van den Hoven, J. (2018). Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5(15). https://doi.org/10.3389/frobt.2018.00015.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77. https://doi.org/10.1111/j.1468-5930.2007.00346.x.
Strawson, P. F. (1962). Freedom and resentment. In G. Watson (Ed.), Free will (Second Edition) (Vol. 48, pp. 72–93). Oxford: Oxford University Press, 2003.
Wallace, R. J. (1994). Responsibility and the moral sentiments. Cambridge: Harvard University Press.
Watson, G. (1996). Two faces of responsibility. In G. Watson (Ed.), Agency and answerability (pp. 260–288). Oxford: Clarendon Press. 2004.
Williams, B. (1981). Moral luck. In B. Williams, Moral luck (pp. 20–39). Cambridge: Cambridge University Press.
Wilson, G., & Shpall, S. (2016). Action. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2016). https://plato.stanford.edu/archives/win2016/entries/action/.
© 2020 Springer-Verlag GmbH Germany, part of Springer Nature
Cite this chapter
Kühler, M. (2020). Technological Moral Luck. In: Beck, B., & Kühler, M. (Eds.), Technology, Anthropology, and Dimensions of Responsibility. Techno:Phil – Aktuelle Herausforderungen der Technikphilosophie, vol. 1. J.B. Metzler, Stuttgart. https://doi.org/10.1007/978-3-476-04896-7_9
Print ISBN: 978-3-476-04895-0
Online ISBN: 978-3-476-04896-7