DOI: 10.1145/3425174.3425214

Contributions to improve the combined selection of concurrent software testing techniques

Published: 22 October 2020

ABSTRACT

[Context] A variety of testing techniques are available, each with different and often complementary characteristics (e.g., cost of application, effectiveness in revealing defects, types of defects targeted). Accounting for these complementary characteristics makes the selection process even more complex: testers must decide which techniques to use in a specific situation, and combining different testing techniques outperforms the use of any single technique alone. [Objective] This paper proposes an approach to support the combined selection of testing techniques for concurrent software projects. The approach is implemented in the SeleCTT-v2 tool, which helps testers find complementary testing techniques within the body of knowledge established for SeleCTT-v1. [Method] We conducted a case study to compare the combined selection approach (SeleCTT-v2) with the previous version (SeleCTT-v1) and to investigate how testers perform a combined selection task for concurrent software projects. [Results and Conclusions] The results indicate an increase in the effectiveness of the suggested combinations of concurrent testing techniques: SeleCTT-v2 is less likely to recommend an unsuitable combination of techniques for a concurrent software project, thus avoiding the costs associated with applying an inappropriate testing technique. The study also indicates that, had the participating testers had access to a tool supporting the selection process, the effectiveness of their selections would have been higher, as evidenced by our approach. Combining testing techniques is essential to ensure that all features of the software under test are exercised, helping to prevent errors from going undetected.
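To make the idea of combined selection concrete, the sketch below shows one way such a recommendation could work in principle: a greedy heuristic that repeatedly adds the technique covering the most still-uncovered defect types per unit of application cost, until the project's target defect types are covered. This is purely illustrative and is not the SeleCTT-v2 algorithm; the technique names, defect types, and costs are invented for the example.

# Hypothetical sketch only -- not the SeleCTT-v2 algorithm. Illustrates the general
# idea of combined selection: greedily pick the technique that covers the most
# still-uncovered defect types per unit of application cost.
from dataclasses import dataclass

@dataclass(frozen=True)
class Technique:
    name: str
    defect_types: frozenset   # concurrency defect types the technique targets (invented)
    cost: int                 # relative application cost (invented)

CANDIDATES = [
    Technique("structural testing",        frozenset({"synchronization", "interleaving"}), 3),
    Technique("mutation testing",          frozenset({"interleaving", "deadlock"}),        5),
    Technique("static race detection",     frozenset({"data race"}),                       2),
    Technique("dynamic deadlock checking", frozenset({"deadlock"}),                        4),
]

def select_combination(candidates, required_defect_types):
    """Greedy heuristic: keep adding the technique with the best
    (newly covered defect types / cost) ratio until all required types are covered."""
    selected, uncovered = [], set(required_defect_types)
    remaining = list(candidates)
    while uncovered and remaining:
        best = max(remaining, key=lambda t: len(t.defect_types & uncovered) / t.cost)
        if not (best.defect_types & uncovered):
            break   # no remaining technique improves coverage
        selected.append(best)
        uncovered -= best.defect_types
        remaining.remove(best)
    return selected, uncovered

combo, missing = select_combination(CANDIDATES, {"data race", "deadlock", "interleaving"})
print([t.name for t in combo], "uncovered:", missing)

In practice, an approach like SeleCTT-v2 works over a much richer characterization of techniques and project attributes than this toy cost/coverage trade-off, but the underlying goal is the same: recommend a set of techniques whose strengths complement one another.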


Published in

SAST '20: Proceedings of the 5th Brazilian Symposium on Systematic and Automated Software Testing
October 2020, 126 pages
ISBN: 9781450387552
DOI: 10.1145/3425174

Copyright © 2020 Association for Computing Machinery.

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Qualifiers

      • research-article
      • Research
      • Refereed limited

      Acceptance Rates

Overall acceptance rate: 45 of 92 submissions, 49%
