research-article · DOI: 10.1145/2635868.2635872

How we get there: a context-guided search strategy in concolic testing

Published: 11 November 2014

ABSTRACT

One of the biggest challenges in concolic testing, an automatic test generation technique, is its huge search space. Concolic testing generates the next input by selecting a branch from a previous execution path and negating its constraint. However, the large number of candidate branches makes a simple exhaustive search infeasible, which often leads to poor test coverage. Several search strategies have been proposed to explore only high-priority branches. Each strategy applies different criteria to branch selection, but most do not consider the context, that is, how execution reached the branch. In this paper, we introduce a context-guided search (CGS) strategy. CGS looks at the preceding branches in an execution path and selects for the next input a branch that appears in a new context. We evaluate CGS with two publicly available concolic testing tools, CREST and CarFast, on six C subjects and six Java subjects. The experimental results show that CGS achieves the highest coverage on all twelve subjects and, on most subjects, reaches a target coverage in far fewer iterations than the other strategies.
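As a rough illustration, the loop below sketches how a context-guided selection might work, modeling the context of a candidate branch as the sequence of the k branches that precede it on the execution path. This is a minimal sketch of the idea, not the authors' implementation: the helpers run_and_get_path and negate_branch, the worklist discipline, and the constant K are hypothetical stand-ins, not the actual API of CREST or CarFast.

```python
# Minimal sketch of a context-guided search (CGS) loop. The context of the
# branch at position i is modeled as the tuple of the K branches preceding
# it on the path; a branch is negated only if that context has not been
# seen before. run_and_get_path and negate_branch are hypothetical helpers
# standing in for the concolic engine (execute and collect the branch
# trace; solve the path condition with branch i negated).

K = 2  # assumed context length; in practice this is a tuning parameter


def context_of(path, i, k=K):
    """Return the branch at path[i] together with its k preceding branches."""
    return tuple(path[max(0, i - k):i + 1])


def cgs(initial_input, budget, run_and_get_path, negate_branch):
    explored = set()            # contexts that have already been tried
    worklist = [initial_input]
    covered_paths = []
    for _ in range(budget):
        if not worklist:
            break
        inp = worklist.pop()
        path = run_and_get_path(inp)       # list of branch IDs, in order
        covered_paths.append(path)
        for i, _branch in enumerate(path):
            ctx = context_of(path, i)
            if ctx in explored:
                continue                   # same branch, same context: skip
            explored.add(ctx)
            new_input = negate_branch(inp, path, i)  # None if unsatisfiable
            if new_input is not None:
                worklist.append(new_input)
                break                      # pursue one new context per run
    return covered_paths
```

A real implementation would sit inside the concolic engine's main loop and combine this check with the tool's existing branch ordering; the sketch shows only the context filter itself.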


Published in

FSE 2014: Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering
November 2014, 856 pages
ISBN: 9781450330565
DOI: 10.1145/2635868

Copyright © 2014 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 17 of 128 submissions, 13%
