DOI: 10.1145/2372233.2372239

Indirect effects in evidential assessment: a case study on regression test technology adoption

Published: 22 September 2012

ABSTRACT

Background: There is a need for efficient regression testing in most software development organizations, and the proposed solutions often involve automation. However, despite this being a well-researched area, research results are rarely applied in industrial practice. Aim: In this paper we aim to bridge the gap between research and practice by providing examples of how evidence-based regression testing approaches can be adopted in industry. We also discuss challenges for the research community. Method: An industrial case study was carried out to evaluate the possibility of improving regression testing at Sony Ericsson Mobile Communications. We analyse the procedure undertaken based on frameworks from the evidence-based software engineering (EBSE) paradigm (with a focus on the evidence) and the automation literature (with a focus on the practical effects). Results: Our results pinpoint the need for systematic approaches when introducing a new tool. Practitioners and researchers need congruent guidelines supporting the appraisal of both the evidence base and the pragmatic effects, direct as well as indirect, of the changes. This is illustrated by the introduction of the automation perspective.


Published in

EAST '12: Proceedings of the 2nd International Workshop on Evidential Assessment of Software Technologies
September 2012, 72 pages
ISBN: 978-1-4503-1509-8
DOI: 10.1145/2372233
Copyright © 2012 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
