Abstract
In this work, we evaluate OpenCL as a programming tool for developing performance-portable applications for general-purpose GPU (GPGPU) computing. While the Khronos Group developed OpenCL with programming portability in mind, performance is not necessarily portable. OpenCL requires performance-impacting initializations that do not exist in other languages such as CUDA. Understanding these implications allows us to provide a single library with decent performance on a variety of platforms. We choose the triangular solver (TRSM) and matrix multiplication (GEMM) as representative Level 3 BLAS routines to implement in OpenCL. We profile TRSM to obtain the time distribution of the OpenCL runtime system. We then provide tuned GEMM kernels for both the NVIDIA Tesla C2050 and the ATI Radeon 5870, the latest GPUs offered by the two companies. We explore the benefits of using the texture cache, the performance ramifications of copying data into images, discrepancies in the OpenCL and CUDA compilers' optimizations, and other issues that affect performance. Experimental results show that nearly 50% of peak performance can be obtained in GEMM on both GPUs in OpenCL. We also show that the performance of these kernels is not highly portable. Finally, we propose the use of auto-tuning to better explore these kernels' parameter space using a search harness.
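To illustrate the kind of initialization overhead the abstract refers to, the following is a minimal sketch (not taken from the paper) of the explicit host-side setup an OpenCL program performs before any kernel can run; the placeholder kernel source and names are assumptions for illustration only. In CUDA, the runtime API performs the equivalent work implicitly, and kernels are typically compiled ahead of time rather than built at run time.

```c
/* Minimal sketch of the OpenCL host-side initialization sequence.
 * Each step is explicit and happens at run time; the clBuildProgram
 * call in particular compiles the kernel just-in-time, a startup cost
 * with no counterpart in the CUDA runtime API. */
#include <stdio.h>
#include <CL/cl.h>

/* Illustrative placeholder kernel, not from the paper. */
static const char *src =
    "__kernel void scale(__global float *x, float a) {"
    "    int i = get_global_id(0);"
    "    x[i] *= a;"
    "}";

int main(void)
{
    cl_int err;

    /* 1. Discover a platform and a GPU device. */
    cl_platform_id platform;
    err = clGetPlatformIDs(1, &platform, NULL);

    cl_device_id device;
    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    /* 2. Create a context and a command queue for that device. */
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    /* 3. Build the kernel source at run time and extract the kernel. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    err = clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "scale", &err);

    printf("initialization: %s\n", err == CL_SUCCESS ? "ok" : "error");

    /* Cleanup. */
    clReleaseKernel(kernel);
    clReleaseProgram(prog);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}
```

Profiling where time is spent across these stages (as the paper does for TRSM) is what makes it possible to amortize or hide the OpenCL-specific overheads when targeting multiple platforms.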
| Original language | English |
| --- | --- |
| Pages (from-to) | 391-407 |
| Number of pages | 17 |
| Journal | Parallel Computing |
| Volume | 38 |
| Issue number | 8 |
| DOIs | |
| State | Published - Aug 2012 |
| Externally published | Yes |
Funding
This work was supported by the SCALE-IT fellowship through grant number OR11907-001, and by NVIDIA, Microsoft, the US National Science Foundation, and the US Department of Energy.
Keywords
- Auto-tuning
- Hardware accelerators
- Portability