Acceptance Criteria for Critical Software Based on Testability Estimates and Test Results

  • Conference paper
Safe Comp 96

Abstract

Testability is defined as the probability that a program will fail a test, conditional on the program containing some fault. In this paper, we show that statements about the testability of a program can be described more simply in terms of assumptions on the probability distribution of the program's failure intensity. We can thus state general acceptance conditions in clear mathematical terms using Bayesian inference. We develop two scenarios: one in which the reliability requirement is that the software be completely fault-free, and another in which the requirement is stated as an upper bound on the acceptable failure probability.
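As a rough illustration of the two scenarios (a simplified sketch, not the paper's own derivation), standard Bayesian updating can be applied after N failure-free tests. The sketch below assumes a prior probability that the program is fault-free, a testability-style lower bound theta on the probability that a faulty program fails a randomly selected test, and, for the bounded-failure-probability scenario, a Beta prior on the per-demand failure probability; the function names and all parameter values are hypothetical.

```python
# Illustrative sketch only: Bayesian updating after N failure-free tests,
# under simplifying assumptions stated in the comments (not the paper's
# exact formulation).
from scipy.stats import beta


def posterior_fault_free(p_fault_free: float, theta: float, n_tests: int) -> float:
    """Posterior probability that the program is fault-free after n_tests
    failure-free tests, assuming:
      - prior probability p_fault_free that the program contains no fault;
      - if a fault is present, each test fails independently with
        probability at least theta (a testability-style lower bound),
        so the returned value is a conservative (lower) bound.
    """
    p_faulty = 1.0 - p_fault_free
    # Probability that a faulty program survives n_tests failure-free tests.
    survive = (1.0 - theta) ** n_tests
    return p_fault_free / (p_fault_free + p_faulty * survive)


def prob_exceeds_bound(a: float, b: float, q_max: float, n_tests: int) -> float:
    """Posterior probability that the per-demand failure probability exceeds
    q_max, assuming a Beta(a, b) prior on that probability and n_tests
    independent failure-free tests (the posterior is Beta(a, b + n_tests))."""
    return beta.sf(q_max, a, b + n_tests)


if __name__ == "__main__":
    # Scenario 1: acceptance requires high confidence that the program is fault-free.
    print(posterior_fault_free(p_fault_free=0.5, theta=1e-3, n_tests=5000))
    # Scenario 2: acceptance requires P(failure probability > 1e-4) to be small.
    print(prob_exceeds_bound(a=1.0, b=1.0, q_max=1e-4, n_tests=50000))
```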

Copyright information

© 1997 Springer-Verlag London Limited

About this paper

Cite this paper

Bertolino, A., Strigini, L. (1997). Acceptance Criteria for Critical Software Based on Testability Estimates and Test Results. In: Schoitsch, E. (eds) Safe Comp 96. Springer, London. https://doi.org/10.1007/978-1-4471-0937-2_7

  • DOI: https://doi.org/10.1007/978-1-4471-0937-2_7

  • Publisher Name: Springer, London

  • Print ISBN: 978-3-540-76070-2

  • Online ISBN: 978-1-4471-0937-2

  • eBook Packages: Springer Book Archive
