
SARDE: A Framework for Continuous and Self-Adaptive Resource Demand Estimation

Published: 09 June 2021

Abstract

Resource demands are crucial parameters for modeling and predicting the performance of software systems. Currently, resource demand estimators are usually executed once for system analysis. However, the monitored system, as well as the resource demand itself, are subject to constant change in runtime environments. These changes additionally impact the applicability, the required parametrization, and the resulting accuracy of individual estimation approaches. Over time, this leads to invalid or outdated estimates, which in turn negatively influence the decision-making of adaptive systems. In this article, we present SARDE, a framework for self-adaptive resource demand estimation in continuous environments. SARDE dynamically and continuously tunes, selects, and executes an ensemble of resource demand estimation approaches to adapt to changes in the environment. This creates an autonomous and unsupervised ensemble estimation technique that provides reliable resource demand estimates in dynamic environments. We evaluate SARDE using two realistic datasets: one set of different micro-benchmarks reflecting different possible system states, and one dataset consisting of a continuously running application in a changing environment. Our results show that by continuously applying online optimization, selection, and estimation, SARDE is able to efficiently adapt to the online trace and reduce the model error using the resulting ensemble technique.
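The abstract describes a per-window loop that runs several resource demand estimation approaches, scores them, and selects the best one. The following is a minimal, self-contained sketch of such a select-and-estimate loop; the estimator set, the utilization-based scoring rule, and the synthetic trace are illustrative assumptions of this sketch, not SARDE's actual API:

```python
# Minimal sketch of a continuous select-and-estimate loop in the spirit of
# SARDE. Estimator names, the scoring rule, and the synthetic trace are
# illustrative assumptions, not SARDE's actual interface.
import random

random.seed(0)

def util_law(u, x, r):
    # Service Demand Law: D = U / X.
    return u / x if x > 0 else 0.0

def resp_time(u, x, r):
    # Low-load approximation: response time ~ service demand.
    return r

ESTIMATORS = {"util_law": util_law, "resp_time": resp_time}

def score(demand, window):
    # Judge an estimate by how well it reproduces the observed utilization
    # via U_pred = D * X (mean absolute error; lower is better).
    return sum(abs(demand * x - u) for u, x, _ in window) / len(window)

def run(windows):
    """For each monitoring window, estimate with every approach, score the
    results, and select the best estimator for that window."""
    selected, estimates = [], []
    for window in windows:
        results = {}
        for name, est in ESTIMATORS.items():
            d = sum(est(u, x, r) for u, x, r in window) / len(window)
            results[name] = (d, score(d, window))
        best = min(results, key=lambda n: results[n][1])
        selected.append(best)
        estimates.append(results[best][0])
    return selected, estimates

def make_window(true_d, n=10):
    # Synthetic trace: utilization follows u = D * x plus noise; response
    # time follows an M/M/1-style curve r = D / (1 - u).
    w = []
    for _ in range(n):
        x = random.uniform(2.0, 15.0)                        # throughput (req/s)
        u = min(0.99, true_d * x + random.gauss(0.0, 0.01))  # utilization
        r = true_d / (1.0 - u)                               # response time (s)
        w.append((u, x, r))
    return w

windows = [make_window(0.05) for _ in range(5)]
selected, estimates = run(windows)
print(selected, [round(d, 3) for d in estimates])
```

On this trace the utilization-law estimator explains the observations best and is selected in every window; in a real deployment the selection (and the parametrization of each estimator) would be re-evaluated continuously as the workload drifts.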



Published in

ACM Transactions on Autonomous and Adaptive Systems, Volume 15, Issue 2, June 2020, 91 pages
ISSN: 1556-4665
EISSN: 1556-4703
DOI: 10.1145/3461693

Copyright © 2021 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 9 June 2021
• Accepted: 1 April 2021
• Revised: 1 February 2021
• Received: 1 October 2020
