Experiences with High-Level Programming Directives for Porting Applications to GPUs

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7174)

Abstract

HPC systems now exploit GPUs within their compute nodes to accelerate program performance. As a result, high-end application development has become extremely complex at the node level. In addition to restructuring the node code to exploit the cores and specialized devices, the programmer may need to choose a programming model such as OpenMP or CPU threads in conjunction with an accelerator programming model to share and manage the different node resources. This comes at a time when programmer productivity and the ability to produce portable code have been recognized as major concerns. To offset the high development cost of creating CUDA or OpenCL kernels, directives have been proposed for programming accelerator devices, but their implications are not well known. In this paper, we evaluate state-of-the-art accelerator directives by programming several application kernels, explore transformations to achieve good performance, and examine the expressivity and performance penalty of using high-level directives versus CUDA. We also compare our results to OpenMP implementations to understand the benefits of running the kernels on the accelerator versus the CPU cores.
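
To make the trade-off concrete, the sketch below (illustrative only, not code from the paper; all function names and launch parameters are hypothetical) expresses the same SAXPY-style loop three ways: with OpenACC-style accelerator directives, in the spirit of the directive models the paper evaluates; as a hand-written CUDA kernel with explicit data management; and as an OpenMP loop for the host cores.

/* Illustrative sketch only -- not code from the paper. Compile the CUDA
 * parts with nvcc; the directive and OpenMP variants need a compiler with
 * the corresponding support enabled. */
#include <cuda_runtime.h>

/* (1) Directive version: the compiler derives the GPU kernel, the launch
 *     configuration, and the host<->device transfers from the annotations. */
void saxpy_directives(int n, float a, const float *x, float *y)
{
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

/* (2) CUDA version: the programmer writes the kernel and manages device
 *     memory and the thread-block geometry explicitly. */
__global__ void saxpy_kernel(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

void saxpy_cuda(int n, float a, const float *x, float *y)
{
    float *dx, *dy;
    size_t bytes = n * sizeof(float);
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);
    saxpy_kernel<<<(n + 255) / 256, 256>>>(n, a, dx, dy);
    cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dx);
    cudaFree(dy);
}

/* (3) OpenMP baseline: the same loop shared across the CPU cores. */
void saxpy_openmp(int n, float a, const float *x, float *y)
{
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

In the directive version the compiler is responsible for kernel generation, launch configuration, and data movement, which is precisely the productivity-versus-control trade-off the paper quantifies against hand-written CUDA and OpenMP baselines.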

This work was funded by the Office of Advanced Scientific Computing Research, Office of Science, U.S. Department of Energy under Contract No. DE-AC05-00OR22725 with UT-Battelle, LLC. This research used resources of the Leadership Computing Facility at Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725 with UT-Battelle, LLC. This work was also funded by NSF grant CCF-0917285.

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Hernandez, O., Ding, W., Chapman, B., Kartsaklis, C., Sankaran, R., Graham, R. (2012). Experiences with High-Level Programming Directives for Porting Applications to GPUs. In: Keller, R., Kramer, D., Weiss, JP. (eds) Facing the Multicore - Challenge II. Lecture Notes in Computer Science, vol 7174. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-30397-5_9

  • DOI: https://doi.org/10.1007/978-3-642-30397-5_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-30396-8

  • Online ISBN: 978-3-642-30397-5

  • eBook Packages: Computer Science; Computer Science (R0)
