
Trusting Computer Simulations

Chapter in: Computer Simulations in Science and Engineering

Part of the book series: The Frontiers Collection (FRONTCOLL)

Abstract

Relying on computer simulations and trusting their results is key to the epistemic future of this new research methodology. The questions that interest us in this chapter are: how do researchers typically build reliability into computer simulations, and what exactly would it mean to trust the results of computer simulations? When we attempt to answer these questions, a dilemma arises. On the one hand, it seems that a machine cannot be entirely reliable, in the sense that it is not capable of rendering absolutely correct results.


Notes

  1. Since I have also referred to computer simulations as methods, we could equally say that they are reliable computational methods. I use the two terms interchangeably.

  2. Let me make explicit that the following analysis is strongly committed to the representation of a target system. The reason for taking this route is that most researchers are interested in computer simulations that implement models representing a target system. However, a non-representationalist viewpoint is also possible and desirable, that is, one that admits claims such as ‘the results suggest an increase of temperature in the Arctic, as predicted by theory’ and ‘the results are consistent with experimental results’ as sound claims, instead of merely ‘the results are correct of the target system.’ This shift means that computer simulations can be reliable processes despite not representing a target system.

  3. Knowing and understanding are concepts that express our epistemological states and, in a sense, they can be taken to be ‘mental.’ If so, then neuroscience and psychology are disciplines better prepared to account for these concepts. Another way to analyze them consists in studying the concepts in themselves, showing their assumptions and consequences, and studying their logical structure. It is the latter sense in which philosophers typically discuss the concepts of knowledge and understanding.

  4. There are many good philosophical works on the notion of knowledge. The specialized literature includes Steup and Sosa (2005), Haddock et al. (2009), and Pritchard (2013).

  5. Strictly speaking, p should read: ‘the results of their simulations are correct’, and therefore the researchers are justified in believing that p is true. To simplify matters, I will simply say that researchers are justified in believing that the results of their simulations are correct. This last sentence, of course, is taken to be true.

  6. Let us note that these examples show that a reliable process can be purely cognitive, as in a reasoning process, or external to our mind, as the example of a tree outside my window shows.

  7. As mentioned in the first footnote, we do not strictly need representation. Computer simulations could be reliable in cases where they do not represent, such as when the implemented model is well grounded and has been correctly implemented. I shall not discuss such cases.

  8. The example also works for showing the inaccuracies introduced by computing sin(0.1) in IEEE single-precision floating point.
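This inaccuracy is easy to observe directly. The following sketch (my own illustration, not taken from the chapter) uses Python's standard `struct` module to round intermediate values to IEEE 754 single precision (binary32) and compares the result against a double-precision reference:

```python
import math
import struct

def to_single(x: float) -> float:
    """Round a Python float (binary64) to the nearest IEEE 754 binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

x32 = to_single(0.1)              # 0.1 is not exactly representable in binary
sin32 = to_single(math.sin(x32))  # sine computed on, and stored as, a single-precision value
sin64 = math.sin(0.1)             # double-precision reference

print(f"single precision: {sin32:.17f}")
print(f"double precision: {sin64:.17f}")
print(f"absolute error:   {abs(sin32 - sin64):.2e}")
```

The discrepancy is small but nonzero (well below 10⁻⁶): negligible for many purposes, yet it illustrates the point that the machine does not render an absolutely correct result.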

  9. For more on this issue, see McFarland and Mahadevan (2008), Kennedy and O’Hagan (2001), and Trucano et al. (2006).

  10. Also known as ‘internal validity’ and ‘external validity,’ respectively.

  11. To be fairer to Oberkampf’s, Trucano’s, Roy’s, and Hirsch’s general proposal, I must also mention their analysis of uncertainty and how it propagates throughout the process of designing, programming, and running computer simulations. For a more philosophical treatment of verification and validation, as well as concrete examples, see Oreskes et al. (1994), Küppers and Lenhard (2005), and Hasse and Lenhard (2017).

  12. See Oberkampf and Roy (2010, 21–29) for an analysis of the diversity of concepts. Also see Salari and Kambiz (2003), Sargent (2007), and Naylor (1967a, b).

  13. Also referred to as solution verification in Oberkampf and Roy (2010, 26), and as numerical error estimation in Oberkampf et al. (2003, 26).

  14. Whereas this is a valid claim for some forms of experimentation in science, it is not so for others, such as in economics and psychology.

  15. This claim is widely attributed to Edsger Dijkstra.

  16. Strictly speaking, Ajelli et al. are doing robustness analysis (Weisberg 2013).

  17. For a more detailed discussion on Fig. 4.1, see Oberkampf and Roy (2010, 30).

  18. For an excellent analysis of errors and how they affect scientific practice in general, see Mayo (2010) and Mayo and Spanos (2010a). For how errors affect computer science in particular, see Jason (1989), and for the role of errors in computer science, see Parker (2008). I take it here that errors negatively affect computation.

  19. For an overview of errors in the design and production cycle of computational systems, see Seibel (2009), Fresco and Primiero (2013), and Floridi et al. (2015).

  20. Arthur Stephenson, chairman of the Mars Climate Orbiter Mission Failure Investigation Board, actually believed that this was the main cause of losing contact with the Mars Climate Orbiter probe. See Douglas and Savage (1999).

  21. Provided, naturally, that there are no hardware errors involved.

  22. This is the interpretation of the exchange between energy and angular momentum (Woolfson and Pert 1999a, 18). Let it be noted that the authors do not speak of ‘errors,’ but solely of rounding off the orbit. This example also shows that round-off errors can be interpreted as an inherent part of the programming of a computer simulation. This, of course, does not prevent them from qualifying as ‘errors.’
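The general phenomenon can be demonstrated with a minimal sketch (my own illustration, not Woolfson and Pert's code): repeatedly adding a value that is not exactly representable in binary floating point accumulates round-off step by step, much as round-off accumulates over the time steps of a long-running orbital simulation:

```python
import math

step = 0.1     # not exactly representable in binary floating point
n = 1_000_000

naive = 0.0
for _ in range(n):
    naive += step                    # each addition rounds the partial sum

accurate = math.fsum([step] * n)     # correctly rounded sum of the same addends

print(f"naive sum:    {naive:.12f}")
print(f"accurate sum: {accurate:.12f}")
print(f"drift:        {abs(naive - accurate):.2e}")
```

Since `math.fsum` returns the correctly rounded sum of the very same addends, the gap between the two values is exactly the round-off accumulated by the naive loop, an inherent by-product of the program rather than a coding mistake.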

  23. For an example of epistemically opaque but successful computer simulations, see Lenhard (2006).

  24. Another author whose ideas on epistemic opacity and epistemic trust are worth considering is Julian Newman. For Newman, epistemic opacity is a symptom that modelers have failed to adopt sound practices of software engineering (Newman 2016). Instead, by developing the right engineering and social practices, Newman argues, modelers would be able to avoid several forms of epistemic opacity and, ultimately, reject Humphreys’ assertion that computers are a superior epistemic authority. As he explicitly puts it: “[…] well architected software is not epistemically opaque: its modular structure will facilitate reduction of initial errors, recognition and correction of those errors that are perpetrated, and later systematic integration of new software components” (Newman 2016, 257).

  25. Humphreys has used a similar argument to point out that researchers do not need to know the details of an instrument in order to know that the results of such an instrument are correct (e.g., that the observed entity actually exists) (Humphreys 2009, 618).

  26. Humphreys himself draws parallels between social processes and social epistemology, and concludes that there is no real novelty in either that would affect computer simulations to a greater extent than they affect any other scientific, artistic, or engineering discipline (Humphreys 2009, 619).

  27. In fact, reliabilism might be used to circumvent all forms of opacity (e.g., social opacity, technological opacity, and internal mathematical opacity).

  28. I introduce and discuss reliabilism in the context of computer simulations for the first time in Durán (2014).

  29. In Durán and Formanek (2018a), we extend the sources of reliability to a history of (un)successful computer simulations, robustness analysis, and the role of the expert in sanctioning computer simulations.

Author information

Correspondence to Juan Manuel Durán.


Copyright information

© 2018 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Durán, J.M. (2018). Trusting Computer Simulations. In: Computer Simulations in Science and Engineering. The Frontiers Collection. Springer, Cham. https://doi.org/10.1007/978-3-319-90882-3_4
