DOI: 10.1145/1806799.1806864

Using symbolic evaluation to understand behavior in configurable software systems

Published: 01 May 2010

ABSTRACT

Many modern software systems are designed to be highly configurable, which increases flexibility but can make programs hard to test, analyze, and understand. We present an initial empirical study of how configuration options affect program behavior. We conjecture that, at certain levels of abstraction, configuration spaces are far smaller than the worst case, in which every configuration is distinct. We evaluated our conjecture by studying three configurable software systems: vsftpd, ngIRCd, and grep. We used symbolic evaluation to discover how the settings of run-time configuration options affect line, basic block, edge, and condition coverage for our subjects under a given test suite. Our results strongly suggest that for these subject programs, test suites, and configuration options, when abstracted in terms of the four coverage criteria above, configuration spaces are in fact much smaller than combinatorics would suggest and are effectively the composition of many small, self-contained groupings of options.
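The abstract describes the approach only at a high level. As a rough illustration (not the authors' tool, subject programs, or data), the sketch below models run-time configuration options as booleans, computes a toy line-coverage signature for each full configuration, and partitions the configuration space into coverage-equivalence classes. All option names and "covered lines" in the sketch are invented; in the study, the coverage information is obtained by symbolic evaluation of vsftpd, ngIRCd, and grep under their test suites.

```python
# Illustrative sketch only: group configurations by an observed coverage
# signature and show that the number of distinct behaviors can be far
# smaller than 2^n when coverage depends on small, independent option
# clusters. Option names and coverage are hypothetical, not from the paper.
from itertools import product

OPTIONS = ["anon_enable", "write_enable", "log_ftp", "color"]  # hypothetical

def coverage(config: dict) -> frozenset:
    """Stand-in for 'run the test suite and record line coverage'.
    Here coverage depends on two independent option clusters; the fourth
    option never affects coverage at all."""
    lines = {"main:1", "main:2"}              # always covered
    if config["anon_enable"]:                 # cluster 1: {anon_enable, write_enable}
        lines.add("login:anon")
        if config["write_enable"]:
            lines.add("upload:allowed")
    if config["log_ftp"]:                     # cluster 2: {log_ftp}, independent
        lines.add("log:open")
    return frozenset(lines)                   # 'color' changes nothing

# Enumerate all 2^n configurations and partition them by coverage signature.
classes: dict[frozenset, list[dict]] = {}
for values in product([False, True], repeat=len(OPTIONS)):
    cfg = dict(zip(OPTIONS, values))
    classes.setdefault(coverage(cfg), []).append(cfg)

print(f"{2 ** len(OPTIONS)} configurations collapse into "
      f"{len(classes)} coverage-equivalence classes")
for cov, cfgs in classes.items():
    print(len(cfgs), "configs ->", sorted(cov))
```

In this toy model, 16 configurations collapse into 6 equivalence classes, and the classes factor into independent per-cluster choices, which is the kind of structure the abstract's conjecture predicts for real configuration spaces.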


  • Published in

    ICSE '10: Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1
    May 2010
    627 pages
    ISBN:9781605587196
    DOI:10.1145/1806799

    Copyright © 2010 ACM


    Publisher

    Association for Computing Machinery, New York, NY, United States



    Qualifiers

    • research-article

    Acceptance Rates

    Overall Acceptance Rate: 276 of 1,856 submissions, 15%

