Parallel programming models for dense linear algebra on heterogeneous systems

M. Abalenkovs, A. Abdelfattah, J. Dongarra, M. Gates, A. Haidar, J. Kurzak, P. Luszczek, S. Tomov, I. Yamazaki, A. YarKhan

Research output: Contribution to journal › Article › peer-review


Abstract

We present a review of current best practices in parallel programming models for dense linear algebra (DLA) on heterogeneous architectures. We consider multicore CPUs, stand-alone manycore coprocessors, GPUs, and combinations of these. Of particular interest is the evolution of the programming models for DLA libraries - from the popular LAPACK and ScaLAPACK libraries to their modernized counterparts PLASMA (for multicore CPUs) and MAGMA (for heterogeneous architectures) - as well as other programming models and libraries. Besides providing insights into the programming techniques of the libraries considered, we outline our view of the current strengths and weaknesses of their programming models, especially with regard to hardware trends and the ease of programming the high-performance numerical software that current applications need, in order to motivate work and future directions for the next generation of parallel programming models for high-performance linear algebra libraries on heterogeneous systems.
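To make the tile-based programming model mentioned in the abstract concrete, the following is a minimal serial sketch (in Python with NumPy, an illustration rather than PLASMA's actual C API) of a right-looking tiled Cholesky factorization. In a task-based runtime such as the one PLASMA uses, each tile kernel call below (POTRF, TRSM, SYRK, GEMM) would become a task, scheduled according to the tiles it reads and writes.

```python
import numpy as np

def tiled_cholesky(A, nb):
    """Right-looking tiled Cholesky factorization, A = L @ L.T (L lower).

    Serial sketch of the tile algorithm; n must be divisible by the
    tile size nb. Each kernel call on an nb-by-nb tile corresponds to
    a task in a runtime-scheduled implementation.
    """
    T = A.copy()
    p = A.shape[0] // nb
    for k in range(p):
        kk = slice(k * nb, (k + 1) * nb)
        # POTRF: factor the diagonal tile in place
        T[kk, kk] = np.linalg.cholesky(T[kk, kk])
        for i in range(k + 1, p):
            ii = slice(i * nb, (i + 1) * nb)
            # TRSM: L_ik = A_ik * L_kk^{-T}, via solve(L_kk, A_ik^T)^T
            T[ii, kk] = np.linalg.solve(T[kk, kk], T[ii, kk].T).T
        for i in range(k + 1, p):
            ii = slice(i * nb, (i + 1) * nb)
            # SYRK: update the trailing diagonal tile
            T[ii, ii] -= T[ii, kk] @ T[ii, kk].T
            for j in range(k + 1, i):
                jj = slice(j * nb, (j + 1) * nb)
                # GEMM: update an off-diagonal trailing tile
                T[ii, jj] -= T[ii, kk] @ T[jj, kk].T
    return np.tril(T)
```

The loop structure makes the data dependencies explicit: each TRSM depends on the POTRF of its step, and each SYRK/GEMM update depends on the TRSM tiles it consumes, which is exactly the task DAG a runtime scheduler exploits for parallelism.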

Original language: English
Pages (from-to): 67-86
Number of pages: 20
Journal: Supercomputing Frontiers and Innovations
Volume: 2
Issue number: 4
DOIs
State: Published - 2015

Keywords

  • Dense linear algebra
  • GPU
  • HPC
  • Multicore
  • Programming models
  • Runtime
