
Part of the book series: Techno:Phil – Aktuelle Herausforderungen der Technikphilosophie (TPAHT, volume 1)

Abstract

It is a pervasive feature of today’s life that we rely more and more on technology when making decisions. For example, we often “blindly” follow the instructions of navigation systems when driving. Letting the navigation system “take control” is precisely one of the main reasons to use such technology in the first place, because we usually do not have the time to determine the best route ourselves, especially given the current traffic situation. Moreover, we may even have developed a tendency to see ourselves as less responsible, or to shun responsibility altogether, because of this lack of control, as when we say that it was not really us who made the decision but the navigation system. In this paper, I address this claim about our diminished or even absent moral responsibility when relying on technology in our decision-making. For, if it could be shown that by relying on technology we do indeed lose a morally relevant form of control, but that we are or should be held responsible for our decisions and their consequences nonetheless, this moral practice would include more and more cases of moral luck, i.e. we would be held morally responsible for things beyond our control. I propose to dub such instances technological moral luck and argue that the stronger the version of the underlying control principle for moral responsibility we adopt, the more we will have to accept that moral responsibility becomes a matter of moral luck if we still want to hold agents morally responsible when they rely on technology.


Notes

  1. (Köhler 2018).

  2. For an overview of relevant psychological effects, see (Cummings 2006).

  3. Accordingly, I will not be concerned here with cases of self-driving cars, regarding which it has especially been argued that a “responsibility gap” occurs, since neither the driver nor the car may be considered morally responsible. While the driver as well as the designers and programmers apparently lack the kind of control necessary for moral responsibility, the self-driving car lacks the prerequisites for being considered a morally responsible agent in the first place. Consequently, there appears to be no one left to whom moral responsibility could be attributed. The same holds for sufficiently autonomous weapons systems, prominently coined “killer robots,” in the context of which the issue is mostly discussed. See (Matthias 2004; Cummings 2006; Sparrow 2007; Hevelke and Nida-Rümelin 2015; Champagne and Tonkens 2015; Danaher 2016, 2019; Gunkel 2017; Santoni de Sio and van den Hoven 2018; Nyholm 2018; Jong 2019; Himmelreich 2019). For an overview of the debate, see (Noorman 2018).

  4. For an overview of the debate on moral luck, see (Nelkin 2019).

  5. To avoid a possible misunderstanding, the notion of technological moral luck is not meant to refer to an additional kind of moral luck, i.e. in addition to the four seminal kinds of moral luck introduced by Thomas Nagel, namely resultant, circumstantial, constitutive, and causal luck (cp. Nagel 1979, p. 28). The notion is merely meant to highlight technology’s influence on moral luck. For, technology may be the deciding factor 1) in whether an agent achieves the intended result (resultant luck), e.g. warning someone in time by calling the person on the phone, 2) in whether an agent ends up in a situation in the first place (circumstantial luck), e.g. when using a navigation system or following restaurant suggestions based on analyzed user data, 3) in what kind of character an agent develops due to technology’s influence (constitutive luck), e.g. due to the use of social media in his or her upbringing, or to taking a certain medication or being subject to deep brain stimulation, both of which may substantially affect one’s personality, and 4) in a person’s autonomy in general (causal luck), e.g. in case the person has been born as a “designer baby,” at least if one follows the respective criticism. For a concise discussion of this last point, see (Beck forthcoming).

  6. For an overview of the notion of and debate on moral responsibility, see (Eshleman 2016).

  7. For an overview in this regard, see (Wilson and Shpall 2016, Sect. 1.1).

  8. For an overview of the debate on understanding action and attempts, see again (Wilson and Shpall 2016).

  9. See (Fischer and Ravizza 1998).

  10. Cp. (Fischer and Ravizza 1998, ch. 3).

  11. (Fischer and Ravizza 1998, p. 89).

  12. For the distinction between attributability and accountability, see prominently (Watson 1996). For the general distinction between causal responsibility, moral responsibility, and praise- and blameworthiness in the context of moral luck, see (Concepcion 2002). For the notion of agent regret, which highlights the agent’s causal and agential, albeit blameless, involvement in what happened and which is not the same as mere spectator regret, see (Baron 1988; Rorty 1980; Williams 1981, pp. 27–31; Kühler 2013, ch. 13).

  13. Another underlying issue should be noted at this point. For, the mentioned relation between being in fact responsible and being held responsible may be interpreted either as a conceptual or as a normative relation. While the former implies that holding someone responsible for something for which the person is in fact not responsible would be conceptually incoherent to begin with, the latter would allow such a practice as indeed conceptually possible but prima facie unfair. Hence, the latter view essentially adopts an ascriptivist notion of moral responsibility, following Peter F. Strawson’s seminal account. See (Strawson 1962) and also (Wallace 1994; McKenna and Russell 2008). I have argued elsewhere in favor of interpreting the control principle in this latter, ascriptivist fashion, i.e. as a normative principle of fairness, including its underlying principle “ought implies can” (cp. Kühler 2012, 2013, 2016).

  14. I have discussed this principle at length in (Kühler 2013). See also (Ackeren and Kühler 2016).

  15. Of course, such a comprehensive moral practice of holding persons accountable also comprises conditions of fairness, like (reasonably expectable) knowledge, foresight, care, avoidance of negligence, and the absence of manipulation or coercion, which may be put forward as justifications or excuses in order to limit an agent’s accountability.

  16. See, again, (Nelkin 2019, Sect. 1), who provides a general overview of the corresponding debate on moral luck.

  17. Cp. (Kindhäuser 2011, Sect. IV.2.d; Morge 2015). See also (Santoni de Sio and van den Hoven 2018, pp. 8–11; Himmelreich 2019, p. 10) concerning similar tracing accounts of responsibility, according to which an agent may be held responsible for actions of autonomous weapon systems if the outcome in question can be traced back to a relevant decision or action of the agent.

  18. This is not to say that the strong version of the control principle is therefore wrong or untenable. However, I take it that we are rather hesitant to adopt it in its full force, given our current moral practice. Essentially, this amounts to the corresponding observation made by defenders of moral luck, namely that moral luck is a pervasive element of our moral practice, at least if one adopts the strong version of the control principle.

  19. The strong version of the control principle can, of course, be qualified analogously.

  20. No wonder, then, that much of the debate on “responsibility gaps” regarding autonomously acting technology, most notably weapons systems or cars, essentially comes down to referring, more or less implicitly, to different types of control as well as to different kinds of responsibility in order to close or highlight specific gaps. See, for instance, (Champagne and Tonkens 2015; Hevelke and Nida-Rümelin 2015; Danaher 2016; Nyholm 2018; Jong 2019; Santoni de Sio and van den Hoven 2018; Himmelreich 2019).

  21. At most, it might be argued that it is fair to hold the driver morally accountable, and even to consider him or her blameworthy, if the driver had shown attributable negligence with regard to the reliability of the car’s brakes beforehand. However, this still raises the question of what exactly the driver may be held morally accountable for or considered blameworthy of: only the unreliable state of the brakes, or also bumping into the car in front? Assuming the strong version of the control principle, only the former appears to be warranted.

  22. Following such a line of thought, John Danaher has recently argued that relying more and more on technology may even put our moral agency as such in jeopardy. Cp. (Danaher 2019).

  23. In other words, the challenge to moral responsibility stems from a lack of computational transparency, i.e. of how information is processed and results are reached. Moreover, even if the computation were transparent, it would need to be translated into reasons that can be critically reflected upon, yielding all the problems that such translations bring with them.

  24. However, even if there is some positive previous experience with the technology in question, consider more controversial cases, like a judge relying on a predictive algorithm when deciding whether to put a child into foster care instead of leaving him or her with the biological parents. If the judge cannot reflect on the algorithm’s suggestion in terms of good or bad reasons, following it arguably becomes a matter of “blindly” following or trusting it, i.e. trusting that the algorithm produces the intended result in that it is sufficiently functionally equivalent to a well-reasoned human decision. For a detailed discussion of this topic, see Thomas Grote’s contribution in this volume.

  25. Cp. (Fischer and Ravizza 1998, ch. 8).

  26. Cp. (Fischer and Ravizza 1998, pp. 5–7) and, again, (Strawson 1962).


Author information


Correspondence to Michael Kühler.


Copyright information

© 2020 Springer-Verlag GmbH Germany, part of Springer Nature

About this chapter


Cite this chapter

Kühler, M. (2020). Technological Moral Luck. In: Beck, B., Kühler, M. (eds) Technology, Anthropology, and Dimensions of Responsibility. Techno:Phil – Aktuelle Herausforderungen der Technikphilosophie, vol. 1. J.B. Metzler, Stuttgart. https://doi.org/10.1007/978-3-476-04896-7_9


  • DOI: https://doi.org/10.1007/978-3-476-04896-7_9

  • Publisher Name: J.B. Metzler, Stuttgart

  • Print ISBN: 978-3-476-04895-0

  • Online ISBN: 978-3-476-04896-7

  • eBook Packages: J.B. Metzler Humanities (German Language)
