ABSTRACT
[Context] A variety of testing techniques is available, each with different and often complementary characteristics (e.g., cost of application, effectiveness in revealing defects, types of defects revealed). These complementary characteristics make the selection process even more complex: testers must decide which techniques to use in a specific situation, and combining different testing techniques outperforms the use of any single technique alone. [Objective] This paper proposes an approach to support the combined selection of testing techniques for concurrent software projects. The approach is implemented in the SeleCTT-v2 tool, which supports testers in finding complementary testing techniques within the body of knowledge proposed in SeleCTT-v1. [Method] We conducted a case study comparing the combined selection approach (SeleCTT-v2) with the previous version (SeleCTT-v1) and investigating how testers perform a combined selection task for concurrent software projects. [Results and Conclusions] The results indicate an increase in the effectiveness of the suggested combinations of concurrent testing techniques, showing that SeleCTT-v2 is less likely to recommend an unsuitable combination of techniques for a concurrent software project, thus avoiding the costs associated with applying an inappropriate testing technique. Had the testers had access to a tool that supports the selection process, the effectiveness of their results would have been higher, as evidenced by our approach. Combining testing techniques is essential to ensure that all features of the software under test are exercised, preventing possible errors from going undetected.