Abstract
Understanding the characteristics of data stored in data centers helps computer scientists identify the most suitable storage infrastructure for these workloads. For example, knowing the relevance of file formats allows optimizing for the relevant formats and also helps, during procurement, to define benchmarks that cover these formats. Existing studies that investigate performance improvements and data-reduction techniques such as deduplication and compression operate on small sets of data. Some of those studies claim the selected data is representative and extrapolate their results to the scale of the data center. One hurdle to running novel schemes on the complete data is the vast amount of data stored and, thus, the resources required to analyze the complete data set. Even if this were feasible, the costs for running many such experiments must be justified.
This paper investigates stochastic sampling methods to compute and analyze quantities of interest, weighted both by file count and by occupied storage space. It will be demonstrated that on our production system, scanning 1 % of files and data volume is sufficient to draw conclusions. This speeds up the analysis process and reduces the costs of such studies significantly. The contributions of this paper are: (1) the systematic investigation of the inherent analysis error when operating only on a subset of data, (2) the demonstration of methods that help future studies to mitigate this error, and (3) the illustration of the approach in a study of scientific file types and compression for a data center.
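To make the sampling approach concrete, the following minimal Python sketch (not the paper's actual tooling; the file list, suffix, sampling rate, and seed are illustrative assumptions) estimates the share of files of a given type from a simple random sample and reports a 95 % Wilson score confidence interval:

```python
import math
import random

def wilson_interval(k, n, z=1.96):
    """95 % Wilson score confidence interval for k successes in n trials."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

def estimate_share(all_files, suffix=".nc", rate=0.01, seed=1):
    """Estimate the fraction of files ending in `suffix` from a 1 % sample."""
    rng = random.Random(seed)
    sample = [f for f in all_files if rng.random() < rate]
    hits = sum(1 for f in sample if f.endswith(suffix))
    return hits / len(sample), wilson_interval(hits, len(sample))

# Hypothetical usage: `all_files` would come from a metadata walk of the system.
share, (lo, hi) = estimate_share(["a.nc", "b.grb"] * 100_000)
print(f"share: {share:.3f}, 95 % CI: [{lo:.3f}, {hi:.3f}]")
```

Sampling by file count as above estimates count-weighted quantities; estimating volume-weighted quantities requires weighting sampled files by their size, which is where the analysis error investigated in the paper arises.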
Notes
- 1.
The value is an estimate based on the TCO of the system over 5 years. It is conservative and does not include secondary costs such as the jitter that the induced I/O introduces into other running models.
- 2.
Obviously, if those 160 projects are not representative, deducing properties of the full data is not valid. Still, the introduced analysis and approaches remain correct. The number of 10 k files was chosen because it ensures that at most 0.5 % of the files are scanned; the underlying sample-size calculation is sketched after these notes.
- 3.
Among the GZIP files, the extension tar.gz is observed on 9 % of files, representing 53 % of the overall GZIP data size. Thus, by volume, most GZIP data is also TAR data (see the second sketch after these notes for the count-versus-volume distinction).
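The bound in note 2 follows the standard sample-size calculation for estimating a proportion. A hedged sketch, assuming a 95 % confidence level and a 1 % margin of error (illustrative values, not taken from the paper):

```python
import math

def sample_size_for_proportion(e=0.01, p=0.5, z=1.96, population=None):
    """Smallest n giving a proportion estimate with margin of error <= e.

    p=0.5 is the conservative worst case; the finite-population correction
    applies when the total file count `population` is known.
    """
    n0 = z * z * p * (1 - p) / (e * e)
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# About 9,604 files suffice for a 1 % margin at 95 % confidence -- the same
# order of magnitude as the 10 k files mentioned in note 2.
print(sample_size_for_proportion(e=0.01))
```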
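Note 3 hinges on the distinction between count-weighted and size-weighted shares. The following toy example (entirely made-up file names and sizes) shows how a few large tar.gz archives can dominate the volume while being a small fraction of the file count:

```python
# Hypothetical scan results: one large archive, several small compressed logs.
files = [
    ("model.tar.gz", 5_000_000_000),
    ("log1.gz", 10_000),
    ("log2.gz", 20_000),
    ("log3.gz", 15_000),
]
gz  = [(n, s) for n, s in files if n.endswith(".gz")]
tar = [(n, s) for n, s in gz if n.endswith(".tar.gz")]

count_share = len(tar) / len(gz)                                # share by file count
size_share  = sum(s for _, s in tar) / sum(s for _, s in gz)    # share by volume
print(f"tar.gz share by count: {count_share:.0%}, by volume: {size_share:.0%}")
```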
Acknowledgements
I thank Charlotte Jentzsch for the fruitful discussions.
Cite this paper
Kunkel, J.M. (2016). Analyzing Data Properties Using Statistical Sampling Techniques – Illustrated on Scientific File Formats and Compression Features. In: Taufer, M., Mohr, B., Kunkel, J. (eds) High Performance Computing. ISC High Performance 2016. Lecture Notes in Computer Science, vol 9945. Springer, Cham. https://doi.org/10.1007/978-3-319-46079-6_10