DOI: 10.1145/3315508.3329976

Research article

A case study on machine learning for synthesizing benchmarks

Published: 22 June 2019

ABSTRACT

Good benchmarks are hard to find because keeping them representative of the constantly changing challenges of a field requires substantial effort. Synthetic benchmarks are a common way to address this, and machine learning methods are natural candidates for synthetic benchmark generation. In this paper we investigate the usefulness of machine learning in the prominent CLgen benchmark generator. We re-evaluate CLgen by comparing the benchmarks generated by its model with the raw data used to train it. For the use case considered, this re-evaluation indicates that machine learning did not yield additional benefit over a simpler method that uses the raw data directly. We investigate the reasons for this and provide further insights into the challenges the problem poses for potential future generators.
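The comparison the abstract describes can be illustrated with a minimal sketch: extract coarse static features from a set of synthesized kernels and from the raw training corpus, then compare their summary statistics. Everything below is a hypothetical illustration under assumed inputs; the kernel strings, the feature set, and the function names (`static_features`, `summarize`) are made up for this sketch and are not CLgen's actual methodology or output.

```python
# Hypothetical sketch of the re-evaluation idea: compare synthesized
# kernels against a raw corpus via simple static features. The kernel
# strings are made-up stand-ins, not real CLgen output.
import re
from statistics import mean

def static_features(kernel: str) -> dict:
    """Extract a few coarse static features from an OpenCL kernel string."""
    tokens = re.findall(r"[A-Za-z_]\w*|\S", kernel)
    return {
        "num_tokens": len(tokens),
        "num_branches": sum(t == "if" for t in tokens),
        "num_loops": sum(t in ("for", "while") for t in tokens),
    }

def summarize(kernels):
    """Average each feature over a set of kernels."""
    feats = [static_features(k) for k in kernels]
    return {key: mean(f[key] for f in feats) for key in feats[0]}

raw_corpus = [
    "__kernel void a(__global float* x) { for (int i = 0; i < 4; i++) x[i] += 1.0f; }",
    "__kernel void b(__global int* y) { if (y[0] > 0) y[0] = 0; }",
]
generated = [
    "__kernel void g(__global float* z) { z[0] = z[0] * 2.0f; }",
]

print("raw corpus:", summarize(raw_corpus))
print("generated: ", summarize(generated))
```

If the two summaries are close, the generated set adds little beyond what the raw corpus already covers, which is the shape of the paper's finding for the use case it examines.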


Published in
        MAPL 2019: Proceedings of the 3rd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages
        June 2019
        46 pages
        ISBN:9781450367196
        DOI:10.1145/3315508

        Copyright © 2019 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

