DOI: 10.1145/3624032.3624036
Research Article

An Approach to Regression Testing Selection based on Code Changes and Smells

Published: 17 October 2023

ABSTRACT

Regression testing is a software maintenance activity that involves re-executing test cases on a modified system to check whether code changes introduce new faults. However, it can be time-consuming and resource-intensive, especially for large systems. Regression test selection techniques help address this issue by choosing a subset of test cases to run. The change-based technique selects test cases based on the modified classes, reducing the test suite size; as a consequence, the selected suite covers fewer classes and is less effective at revealing design flaws. From this perspective, code smells are well-known indicators of poor design that threaten the quality of software systems. In this study, we propose an approach that combines code changes and code smells to select regression tests, introducing two new techniques: code-smell-based selection and combined change-and-smell selection. We also developed the Regression Testing Selection Tool (RTST) to automate the selection process. We empirically evaluated the approach on Defects4J projects, comparing the effectiveness of the new techniques against the change-based technique as a baseline. The results show that the change-based technique achieves the highest reduction in test suite size but with lower class coverage. In contrast, test cases selected using changed classes and code smells combined can potentially find more bugs, and the code-smell-based technique provides class coverage comparable to the combined technique. Our findings highlight the benefits of incorporating code smells into regression test selection and suggest opportunities for improving both the efficiency and the effectiveness of regression testing.
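At a high level, the three techniques can be viewed as set operations over a per-test class-coverage map: change-based selection targets modified classes, smell-based selection targets classes flagged by a smell detector, and the combined technique targets the union of the two. The sketch below is a minimal illustration of that selection logic under this assumption; all identifiers (test_coverage, changed_classes, smelly_classes) are hypothetical and do not reproduce the RTST implementation.

```python
# Minimal sketch of the three selection strategies described in the
# abstract, assuming a per-test class-coverage map is available.
# All names below are hypothetical, not the paper's RTST tool.

def select_tests(test_coverage: dict, target_classes: set) -> set:
    """Select every test that covers at least one target class."""
    return {test for test, covered in test_coverage.items()
            if covered & target_classes}

# Hypothetical inputs: the classes each test exercises, the classes
# modified in the new revision, and the classes flagged by a smell
# detector (e.g., God Class, Long Method).
test_coverage = {
    "OrderTest":   {"Order", "Cart"},
    "CartTest":    {"Cart"},
    "InvoiceTest": {"Invoice", "Order"},
}
changed_classes = {"Cart"}
smelly_classes = {"Order"}

change_based = select_tests(test_coverage, changed_classes)
smell_based = select_tests(test_coverage, smelly_classes)
combined = select_tests(test_coverage, changed_classes | smelly_classes)

print(sorted(change_based))  # ['CartTest', 'OrderTest']
print(sorted(smell_based))   # ['InvoiceTest', 'OrderTest']
print(sorted(combined))      # ['CartTest', 'InvoiceTest', 'OrderTest']
```

On this toy input, the change-based set misses the smelly Order class entirely, while the combined set covers both changed and smelly classes at the cost of a smaller suite reduction, mirroring the trade-off reported in the evaluation.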


Published in
SAST '23: Proceedings of the 8th Brazilian Symposium on Systematic and Automated Software Testing
September 2023, 133 pages
ISBN: 9798400716294
DOI: 10.1145/3624032
Copyright © 2023 ACM

Publisher: Association for Computing Machinery, New York, NY, United States
