Published August 28, 2014 | Version v1
Dataset | Open Access

Data for: Performance Benchmarking of Application Monitoring Frameworks

Creators

  • Kiel University, Kiel, Germany

Description

Application-level monitoring of continuously operating software systems provides insights into their dynamic behavior and helps to maintain their performance and availability at runtime. However, such monitoring can impose significant runtime overhead on the monitored system, depending on the number and placement of the instrumentation probes used. In order to improve a system's instrumentation and to reduce the resulting monitoring overhead, the performance impact of each probe must be known.
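
As a rough illustration of what such a probe looks like, the following minimal sketch (ours, with hypothetical names; not code from any of the evaluated frameworks) times a monitored method and hands the measurement to a writer, the collection and writing steps being typical sources of monitoring overhead:

```java
// Minimal sketch of an application-level instrumentation probe;
// all names are hypothetical. A real framework would weave the
// before()/after() calls around each monitored method.
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public final class TimingProbe {

    // Queue standing in for the framework's writer (file, socket, ...).
    private static final Queue<String> WRITER =
            new ConcurrentLinkedQueue<>();

    // Instrumentation: the extra calls around the business logic.
    static long before() {
        return System.nanoTime();                  // collection: timestamp
    }

    static void after(String signature, long start) {
        long duration = System.nanoTime() - start; // collection: duration
        WRITER.add(signature + ";" + duration);    // writing: record hand-off
    }

    // Example of an instrumented method.
    static int monitoredBusinessMethod(int x) {
        long t = before();
        try {
            return x * x;                          // actual business logic
        } finally {
            after("monitoredBusinessMethod(int)", t);
        }
    }

    public static void main(String[] args) {
        System.out.println(monitoredBusinessMethod(7));
        System.out.println(WRITER.poll());
    }
}
```

A real framework would typically weave such before/after calls into the bytecode of each monitored method, so every additional probe adds its own collection and writing cost.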
While many monitoring frameworks claim to have minimal impact on performance, these claims are often not backed by a detailed performance evaluation that determines the actual cost of monitoring. Benchmarks are an effective and affordable means for such evaluations. However, no benchmark specifically targeting the overhead of monitoring itself exists, nor is there an established benchmark engineering methodology providing guidelines for the design, execution, and analysis of benchmarks.
This thesis introduces a benchmark approach to measure the performance overhead of application-level monitoring frameworks. The core contributions of this approach are (1) a definition of common causes of monitoring overhead, (2) a general benchmark engineering methodology, (3) the MooBench micro-benchmark to measure and quantify these causes of monitoring overhead, and (4) detailed performance evaluations of three different application-level monitoring frameworks. Extensive experiments demonstrate the feasibility and practicality of the approach and validate the benchmark results. The benchmark is available as open-source software, and the results of all experiments are available for download to facilitate further validation and replication of the results.
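
To illustrate how such a micro-benchmark can quantify monitoring overhead, the following simplified sketch (ours, in the spirit of MooBench but not its actual source; all constants are hypothetical) repeatedly invokes a method with a fixed, busy-waited duration and records each call's response time. Running the same loop once without and once with monitoring enabled attributes the difference in response times to the monitoring probes:

```java
// Simplified sketch of a monitoring-overhead micro-benchmark; not the
// actual MooBench source. The monitored method busy-waits for a fixed
// time so that any increase in response time is caused by monitoring.
public final class OverheadBenchmark {

    // Busy-wait so the method has a deterministic baseline duration.
    static void monitoredMethod(long methodTimeNs) {
        long end = System.nanoTime() + methodTimeNs;
        while (System.nanoTime() < end) { /* spin */ }
    }

    public static void main(String[] args) {
        final int totalCalls = 2_000_000;   // includes warm-up for the JIT
        final int warmup = totalCalls / 2;  // discard the first half
        final long methodTimeNs = 500;      // hypothetical baseline duration
        long[] responseTimes = new long[totalCalls];

        for (int i = 0; i < totalCalls; i++) {
            long start = System.nanoTime();
            monitoredMethod(methodTimeNs);
            responseTimes[i] = System.nanoTime() - start;
        }

        // Report the mean response time of the post-warm-up calls.
        long sum = 0;
        for (int i = warmup; i < totalCalls; i++) sum += responseTimes[i];
        System.out.printf("mean response time: %d ns%n",
                sum / (totalCalls - warmup));
    }
}
```

Discarding the warm-up half of the calls is essential on a JVM, as just-in-time compilation would otherwise dominate the measured response times.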

This dataset supplements the thesis and contains the results of all experiments: the raw result data, the results of additional experiments, and the configurations of our benchmarks.

Files (3.6 GB)

Seven ZIP archives, among them InspectIT.zip. Sizes and MD5 checksums:

  632.7 MB  md5:dd2ba5a86c3af72769fa97ebb51d15f9
  111.4 MB  md5:605de369fc6daa4da7d580dcf50b3964
  659.8 MB  md5:1420230a3e1db23a68b8e11043fc6121
  674.5 MB  md5:984514c49e8c0eb4f3372ea643c773e2
  1.2 GB    md5:f201c46edd3d81d040f7855038cecea2
  256.9 MB  md5:2d503105b923704d42c7615c9812ca2d
  31.0 MB   md5:0764adffd74a082d2daeaa5f0aa6c3f0
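
To verify the integrity of a downloaded archive against the checksums above, a small helper like the following can be used (our illustrative sketch; the class name and usage are hypothetical, not part of the dataset):

```java
// Computes the MD5 checksum of a downloaded archive so it can be
// compared against the values listed above. Usage (hypothetical):
//   java Md5Check InspectIT.zip
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public final class Md5Check {
    public static void main(String[] args) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
            byte[] buffer = new byte[8192];
            for (int n; (n = in.read(buffer)) != -1; ) {
                md5.update(buffer, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md5.digest()) hex.append(String.format("%02x", b));
        System.out.println(hex);  // compare with the md5 value above
    }
}
```

Compile with javac Md5Check.java and run it with the archive name as the argument; the printed digest should match the corresponding md5 value listed above.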

Additional details

Related works

Is supplement to

  • 10.5281/zenodo.11515 (DOI)
  • 978-3-7357-7853-6 (ISBN)