ABSTRACT
Good benchmarks are hard to find: keeping them representative of the constantly changing challenges of a field requires substantial effort. Synthetic benchmarks are a common way to deal with this, and methods from machine learning are natural candidates for synthetic benchmark generation. In this paper we investigate the usefulness of machine learning in the prominent CLgen benchmark generator. We re-evaluate CLgen by comparing the benchmarks generated by its model with the raw data used to train it. This re-evaluation indicates that, for the use case considered, machine learning yielded no additional benefit over a simpler method that uses the raw data directly. We investigate the reasons for this and provide further insights into the challenges the problem poses for potential future generators.
Index Terms: A case study on machine learning for synthesizing benchmarks