
Towards Holistic Continuous Software Performance Assessment

Published: 18 April 2017 (DOI: 10.1145/3053600.3053636)

ABSTRACT

In agile, fast, and continuous development lifecycles, software performance analysis is fundamental to confidently releasing continuously improved software versions. Researchers and industry practitioners have recognized the importance of integrating performance testing into agile development processes in a timely and efficient way. However, existing techniques are fragmented: they account neither for the heterogeneous skills of the users developing polyglot distributed software, nor for their need to automate performance practices so that these can be embedded in the whole lifecycle without breaking its intrinsic velocity. In this paper we present our vision for holistic continuous software performance assessment, which is being implemented in the BenchFlow tool. BenchFlow enables performance testing and analysis practices to be pervasively integrated into continuous development lifecycle activities. Users specify performance activities (e.g., standard performance tests) by relying on an expressive Domain Specific Language for objective-driven performance analysis. The collected performance knowledge can thus be reused to speed up performance activities throughout the entire process.
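The abstract states that users declare performance activities through an expressive Domain Specific Language for objective-driven performance analysis, but it does not show the DSL's syntax. As a minimal sketch only, and not the actual BenchFlow DSL, the following Python fragment illustrates what an objective-driven test declaration could look like; all names (PerformanceTest, Workload, Objective) are hypothetical and introduced here purely for illustration of a test with a workload, a system under test, and pass/fail objectives that a continuous pipeline could evaluate automatically.

```python
# Hypothetical illustration only: this is NOT the BenchFlow DSL, whose
# concrete syntax is not given in the abstract. The class and field names
# below are invented for this sketch.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Workload:
    users: int        # number of concurrent simulated users
    ramp_up_s: int    # seconds to reach the target load
    duration_s: int   # steady-state duration of the test


@dataclass
class Objective:
    metric: str       # e.g. "p95_response_time_ms" or "throughput_rps"
    comparator: str   # "<=" or ">="
    threshold: float  # pass/fail boundary for the metric


@dataclass
class PerformanceTest:
    name: str
    system_under_test: str  # e.g. a container image reference
    workload: Workload
    objectives: List[Objective] = field(default_factory=list)

    def evaluate(self, measured: Dict[str, float]) -> bool:
        """Check every declared objective against measured metrics."""
        ok = True
        for o in self.objectives:
            value = measured[o.metric]
            passed = value <= o.threshold if o.comparator == "<=" else value >= o.threshold
            ok = ok and passed
        return ok


# Example: a standard load test that could gate a continuous-delivery pipeline.
test = PerformanceTest(
    name="checkout-load-test",
    system_under_test="registry.example.com/shop/checkout:1.4.2",  # hypothetical image
    workload=Workload(users=200, ramp_up_s=60, duration_s=600),
    objectives=[
        Objective("p95_response_time_ms", "<=", 250.0),
        Objective("throughput_rps", ">=", 150.0),
    ],
)

# After the load driver reports measurements, the pipeline can decide pass/fail.
print(test.evaluate({"p95_response_time_ms": 231.0, "throughput_rps": 162.5}))
```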


Published in

ICPE '17 Companion: Proceedings of the 8th ACM/SPEC on International Conference on Performance Engineering Companion
April 2017, 248 pages
ISBN: 9781450348997
DOI: 10.1145/3053600
Copyright © 2017 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


