Abstract
A current trend in high-performance computing is to decompose a large linear algebra problem into batches of thousands of smaller problems that can be solved independently, before collating the results. To standardize the interface to these routines, the community is developing an extension to the BLAS standard (the batched BLAS), enabling users to perform thousands of small BLAS operations in parallel whilst making efficient use of their hardware. We discuss the benefits and drawbacks of the current batched BLAS proposals and perform a number of experiments, focusing on general matrix-matrix multiplication (GEMM), to explore their effect on performance. In particular we analyze the effect of novel data layouts which, for example, interleave the matrices in memory to aid vectorization and prefetching of data. Utilizing these modifications, our code outperforms both Intel MKL and NVIDIA cuBLAS by up to 6 times on the self-hosted Intel KNL (codenamed Knights Landing) and NVIDIA Kepler GPU architectures, for large numbers of double precision GEMM operations on matrices of size 2 × 2 to 20 × 20.
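To make the interleaved layout idea concrete, below is a minimal C sketch (our illustration, not the paper's code; the function name `dgemm_batch_interleaved` is hypothetical). It stores element (i,j) of every matrix in the batch contiguously, so the innermost loop runs across the batch index and can be auto-vectorized by the compiler, which is the mechanism the abstract credits for the speedup on small matrices.

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch of an interleaved batched GEMM (illustrative only).  Each batch of
 * n-by-n column-major matrices is stored so that element (i,j) of matrix p
 * lives at A[(j*n + i)*batch + p].  The innermost loop over p then reads
 * consecutive memory, letting the compiler emit wide SIMD FMAs. */
static void dgemm_batch_interleaved(int n, int batch,
                                    const double *A, const double *B,
                                    double *C)   /* C assumed zeroed */
{
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
            for (int k = 0; k < n; ++k) {
                const double *a = &A[(k*n + i)*batch];  /* A_p(i,k), all p */
                const double *b = &B[(j*n + k)*batch];  /* B_p(k,j), all p */
                double       *c = &C[(j*n + i)*batch];  /* C_p(i,j), all p */
                for (int p = 0; p < batch; ++p)         /* vectorizable    */
                    c[p] += a[p] * b[p];
            }
}

int main(void)
{
    const int n = 4, batch = 10000;
    size_t len = (size_t)n * n * batch;
    double *A = malloc(len * sizeof *A);
    double *B = malloc(len * sizeof *B);
    double *C = calloc(len, sizeof *C);       /* zero-initialized */
    for (size_t t = 0; t < len; ++t) { A[t] = 1.0; B[t] = 1.0; }

    dgemm_batch_interleaved(n, batch, A, B, C);

    /* With all-ones inputs, every entry of every C_p should equal n. */
    printf("C_0(0,0) = %g (expected %d)\n", C[0], n);
    free(A); free(B); free(C);
    return 0;
}
```

Contrast this with the pointer-array style of existing batched interfaces (e.g. cuBLAS's `cublasDgemmBatched`), where each matrix is a separate contiguous block; the interleaved layout trades that convenience for unit-stride access across the batch.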
| | |
|---|---|
| Original language | English |
| Pages (from-to) | 495-504 |
| Number of pages | 10 |
| Journal | Procedia Computer Science |
| Volume | 108 |
| DOIs | |
| State | Published - 2017 |
| Event | International Conference on Computational Science, ICCS 2017 - Zurich, Switzerland |
| Duration | Jun 12, 2017 → Jun 14, 2017 |
Bibliographical note
Publisher Copyright: © 2017 The Authors. Published by Elsevier B.V.
Keywords
- BLAS
- Batched BLAS
- High-performance computing
- Memory management
- Parallel processing
- Scientific computing