Abstract
Vulnerable software represents a tremendous threat to modern information systems. Vulnerabilities in widespread applications may be used to spread malware, steal money and conduct targeted attacks. To address this problem, developers and researchers use different approaches of dynamic and static software analysis; one of these approaches is called fuzzing. Fuzzing is performed by generating and sending potentially malformed data to an application under test. Since its first appearance in 1988, fuzzing has evolved considerably, but questions of effectiveness evaluation have not been fully investigated until now.
In our research, we propose a novel approach to evaluating and improving fuzzing effectiveness that takes into account the semantics of executed code along with a quantitative assessment. For this purpose, we use source code complexity metrics specially adapted to the analysis of machine code. We evaluated the effectiveness of these metrics on 104 widespread applications with known vulnerabilities. As a result of these experiments, we were able to identify the metrics best suited to finding bugs. In addition, we propose a set of open-source tools for improving fuzzing effectiveness. The experimental results have shown the viability of our approach and allowed us to reduce the time costs of a fuzzing campaign by an average of 26–28% for 5 well-known fuzzing systems.
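As a rough illustration of complexity-guided target selection, the sketch below ranks functions by McCabe cyclomatic complexity computed from their control-flow graphs. The `CFG` representation and function names are hypothetical; the paper's actual tooling computes adapted metrics over machine code, not this toy structure.

```python
# Sketch: rank functions by McCabe cyclomatic complexity to prioritize
# fuzzing targets. The CFG class here is a hypothetical stand-in for a
# control-flow graph recovered from a binary.

from dataclasses import dataclass, field

@dataclass
class CFG:
    """A function's control-flow graph: basic blocks and directed edges."""
    name: str
    nodes: set = field(default_factory=set)
    edges: set = field(default_factory=set)  # set of (src, dst) pairs

def cyclomatic_complexity(cfg: CFG) -> int:
    # McCabe's measure: M = E - N + 2P, with P = 1 for a single
    # connected function.
    return len(cfg.edges) - len(cfg.nodes) + 2

def rank_targets(cfgs):
    """Order functions from most to least complex, i.e. fuzz-first order."""
    return sorted(cfgs, key=cyclomatic_complexity, reverse=True)

# Toy example: a straight-line function vs. one with branches and a loop.
linear = CFG("parse_header", {1, 2, 3}, {(1, 2), (2, 3)})
branchy = CFG("parse_body", {1, 2, 3, 4, 5},
              {(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (4, 1)})

print([c.name for c in rank_targets([linear, branchy])])
# The branch-heavy function is ranked first.
```

Higher-complexity code tends to concentrate more defects, which is why a fuzzer directed at such regions can reach bugs with less total effort.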
A Appendix: Adapted Metrics List
Copyright information
© 2016 Springer International Publishing Switzerland
Cite this paper
Shudrak, M.O., Zolotarev, V.V. (2016). Improving Fuzzing Using Software Complexity Metrics. In: Kwon, S., Yun, A. (eds) Information Security and Cryptology - ICISC 2015. ICISC 2015. Lecture Notes in Computer Science(), vol 9558. Springer, Cham. https://doi.org/10.1007/978-3-319-30840-1_16
Print ISBN: 978-3-319-30839-5
Online ISBN: 978-3-319-30840-1