DOI: 10.1145/2517208.2517215

Does the discipline of preprocessor annotations matter?: a controlled experiment

Published: 27 October 2013

ABSTRACT

The C preprocessor (CPP) is a simple and language-independent tool, widely used to implement variable software systems using conditional compilation (i.e., by including or excluding annotated code). Although CPP provides powerful means to express variability, it has been criticized for allowing arbitrary annotations that break the underlying structure of the source code. We distinguish between disciplined annotations, which align with the structure of the source code, and undisciplined annotations, which do not. Several studies suggest that especially the latter type of annotation makes it hard to (automatically) analyze the code. However, little is known about whether the type of annotation has an effect on program comprehension. We address this issue by means of a controlled experiment with human subjects. We designed similar tasks for both disciplined and undisciplined annotations to measure program comprehension. Then, we measured the subjects' performance, in terms of correctness and response time, in solving the tasks. Our results suggest that there are no differences between disciplined and undisciplined annotations from a program-comprehension perspective. Nevertheless, we observed that finding and correcting errors is a time-consuming and tedious task in the presence of preprocessor annotations.
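To illustrate the terminology, the following sketch (not taken from the paper's experiment material; the macro names LOGGING and UPPER_BOUND are hypothetical) contrasts a disciplined annotation, which wraps a complete statement, with an undisciplined annotation, which guards only part of an expression and thereby cuts across the syntactic structure of the unpreprocessed code.

    #include <stdio.h>

    /* Disciplined: the annotated block covers a complete statement,
     * so the surrounding code parses the same way in every variant. */
    void report(const char *msg) {
    #ifdef LOGGING
        printf("LOG: %s\n", msg);
    #endif
    }

    /* Undisciplined: the annotation guards only part of the condition,
     * cutting across the syntactic structure of the enclosing if. */
    int in_range(int value) {
        if (value > 0
    #ifdef UPPER_BOUND
            && value < 100
    #endif
           ) {
            return 1;
        }
        return 0;
    }

    int main(void) {
        report("checking");
        return in_range(42) ? 0 : 1;
    }

Both variants of each function compile, but only the disciplined form maps the annotated code onto whole syntactic units, which is what automated analyses (and, per the paper's research question, possibly human readers) rely on.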


    • Published in

      GPCE '13: Proceedings of the 12th International Conference on Generative Programming: Concepts & Experiences
      October 2013
      198 pages
      ISBN: 978-1-4503-2373-4
      DOI: 10.1145/2517208

      Copyright © 2013 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Acceptance Rates

      GPCE '13 Paper Acceptance Rate: 20 of 59 submissions, 34%
      Overall Acceptance Rate: 56 of 180 submissions, 31%
