OpenMP to GPGPU: A compiler framework for automatic translation and optimization

Seyong Lee, Seung Jai Min, Rudolf Eigenmann

Research output: Contribution to journal › Article › peer-review

144 Scopus citations

Abstract

GPGPUs have recently emerged as powerful vehicles for general-purpose high-performance computing. Although the new Compute Unified Device Architecture (CUDA) programming model from NVIDIA offers improved programmability for general computing, programming GPGPUs is still complex and error-prone. This paper presents a compiler framework for automatic source-to-source translation of standard OpenMP applications into CUDA-based GPGPU applications. The goal of this translation is to further improve programmability and make existing OpenMP applications amenable to execution on GPGPUs. We identify several key transformation techniques that enable efficient GPU global memory access and thereby high performance. Experimental results from two important kernels (JACOBI and SPMUL) and two NAS OpenMP Parallel Benchmarks (EP and CG) show that the described translator and compile-time optimizations work well on both regular and irregular applications, leading to performance improvements of up to 50X over the unoptimized translation (up to 328X over serial on a CPU).
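To make the abstract's source-to-source idea concrete, the sketch below shows the kind of mapping such a translator performs: an OpenMP `parallel for` over a JACOBI-style stencil becomes a CUDA kernel in which each loop iteration is executed by one GPU thread reading from global memory. This is an illustrative sketch under assumed names (`jacobi_kernel`, the 2-D block layout) and is not the paper's actual translator output or benchmark code.

```
/* OpenMP input: a JACOBI-style stencil loop (illustrative only). */
#pragma omp parallel for
for (int i = 1; i < n - 1; i++)
    for (int j = 1; j < n - 1; j++)
        b[i * n + j] = 0.25f * (a[(i - 1) * n + j] + a[(i + 1) * n + j] +
                                a[i * n + j - 1] + a[i * n + j + 1]);

/* One possible CUDA translation: the iteration space is mapped onto a
   2-D grid of threads, and the shared arrays a/b live in GPU global
   memory. Hypothetical kernel name and launch configuration. */
__global__ void jacobi_kernel(const float *a, float *b, int n)
{
    int i = blockIdx.y * blockDim.y + threadIdx.y + 1;  /* row index   */
    int j = blockIdx.x * blockDim.x + threadIdx.x + 1;  /* column index */
    if (i < n - 1 && j < n - 1)
        b[i * n + j] = 0.25f * (a[(i - 1) * n + j] + a[(i + 1) * n + j] +
                                a[i * n + j - 1] + a[i * n + j + 1]);
}

/* A caller would launch roughly as:
   dim3 block(16, 16);
   dim3 grid((n + 13) / 14, (n + 13) / 14);
   jacobi_kernel<<<grid, block>>>(d_a, d_b, n);                       */
```

Whether such naively translated global-memory accesses coalesce is exactly the concern the paper's memory-access optimizations address; the unoptimized mapping above is the baseline they improve on by up to 50X.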

Original language: English
Pages (from-to): 101-110
Number of pages: 10
Journal: ACM SIGPLAN Notices
Volume: 44
Issue number: 4
State: Published - 2009
Externally published: Yes

Keywords

  • Automatic translation
  • CUDA
  • Compiler optimization
  • GPU
  • OpenMP

