DOI: 10.1145/3483899.3483909
research-article

Guidelines for Promoting Software Product Line Experiments

Published: 05 October 2021

ABSTRACT

The importance of experimentation for Software Engineering research has been firmly established in recent years. The software engineering community has discussed how to properly report and evaluate experiments using different approaches, such as quality criteria, scales, and checklists. Nevertheless, there are no guidelines to support researchers and practitioners active in specific software engineering research areas, such as Software Product Lines (SPL), in conducting experiments. We hypothesize that experimentation guidelines may aid such a specific area by providing advice and actual excerpts reflecting good practices of SPL experimentation, thus helping the area evolve experimentally. Therefore, the goal of this paper is to provide guidelines for properly reporting and promoting SPL experiments. We defined these guidelines based on well-known software engineering experiment reports, quality evaluation checklists, and data extracted from 211 SPL experiments identified in a systematic mapping study. We evaluated the guidelines in a qualitative study with SPL and experimentation experts, applying open and axial coding procedures. The evaluation enabled us to improve the guidelines. The resulting guidelines contain specific advice for researchers active in SPL and provide examples taken from published SPL experiments. The experts' positive feedback indicates that the proposed guidelines can aid SPL researchers and practitioners. Sharing the guidelines could support the conduct of SPL experiments and allow the area to evolve further through prospective replications and reproductions of well-designed and well-reported experiments.

Published in

    SBCARS '21: Proceedings of the 15th Brazilian Symposium on Software Components, Architectures, and Reuse
    September 2021, 109 pages
    ISBN: 9781450384193
    DOI: 10.1145/3483899

    Copyright © 2021 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    • Published: 5 October 2021

    Qualifiers

    • research-article
    • Research
    • Refereed limited

    Acceptance Rates

    Overall Acceptance Rate: 23 of 79 submissions, 29%
