Tridiagonalization of a dense symmetric matrix on multiple GPUs and its application to symmetric eigenvalue problems

Ichitaro Yamazaki, Tingxing Dong, Raffaele Solcà, Stanimire Tomov, Jack Dongarra, Thomas Schulthess

Research output: Contribution to journal › Article › peer-review

24 Scopus citations

Abstract

For software to fully exploit the computing power of emerging heterogeneous computers, not only must the required computational kernels be optimized for the specific hardware architectures, but an effective scheduling scheme is also needed to utilize the available heterogeneous computational units and to hide the communication between them. As a case study, we develop a static scheduling scheme for the tridiagonalization of a symmetric dense matrix on multicore CPUs with multiple graphics processing units (GPUs) on a single compute node. We then parallelize and optimize the Basic Linear Algebra Subroutines (BLAS)-2 symmetric matrix-vector multiplication and the BLAS-3 low-rank symmetric matrix updates on the GPUs. We demonstrate the good scalability of these multi-GPU BLAS kernels and the effectiveness of our scheduling scheme on twelve Intel Xeon processors and three NVIDIA GPUs. We then integrate our hybrid CPU-GPU kernel into computational kernels at higher levels of the software stack, namely a shared-memory dense eigensolver and a distributed-memory sparse eigensolver. Our experimental results show that our kernels greatly improve the performance of these higher-level kernels, not only reducing the solution time but also enabling the solution of larger-scale problems. Because such symmetric eigenvalue problems arise in many scientific and engineering simulations, our kernels could potentially lead to new scientific discoveries. Furthermore, these dense linear algebra algorithms exhibit algorithmic characteristics that appear in many other algorithms. Hence, they are not only important computational kernels in their own right but also useful testbeds for studying the performance of emerging computers and the effects of various optimization techniques.
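The core operation the abstract describes, reducing a dense symmetric matrix to tridiagonal form via Householder reflectors, can be illustrated with a minimal unblocked NumPy sketch. This is a reference version only: the paper's actual implementation is blocked and split across multicore CPUs and multiple GPUs, with the BLAS-2 symmetric matrix-vector product (`symv`) and the symmetric rank-2 update offloaded to the GPUs.

```python
import numpy as np

def tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form T = Q^T A Q
    using Householder reflectors (unblocked reference version)."""
    A = np.array(A, dtype=float, copy=True)
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k + 1:, k]                  # column below the diagonal
        nrm = np.linalg.norm(x)
        if nrm == 0.0:
            continue                      # column already in tridiagonal form
        alpha = -np.copysign(nrm, x[0])   # sign chosen to avoid cancellation
        v = x.copy()
        v[0] -= alpha
        v /= np.linalg.norm(v)            # unit reflector: H = I - 2 v v^T
        # BLAS-2 symmetric matrix-vector product (the kernel the paper
        # parallelizes across GPUs)
        p = A[k + 1:, k + 1:] @ v
        w = p - (v @ p) * v
        # Two-sided update H A H applied as a symmetric rank-2 update
        # (blocked versions of this are the paper's BLAS-3 low-rank updates)
        A[k + 1:, k + 1:] -= 2.0 * (np.outer(v, w) + np.outer(w, v))
        A[k + 1, k] = A[k, k + 1] = alpha
        A[k + 2:, k] = 0.0
        A[k, k + 2:] = 0.0
    return A
```

Because the transformation is a similarity transform, the tridiagonal result has the same eigenvalues as the input, which is why this reduction is the standard first phase of dense symmetric eigensolvers.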

Original language: English
Pages (from-to): 2652-2666
Number of pages: 15
Journal: Concurrency and Computation: Practice and Experience
Volume: 26
Issue number: 16
DOIs
State: Published - Nov 1 2014
Externally published: Yes

Funding

Funders | Funder number
National Science Foundation | #OCI-1032815
National Science Foundation | #OCI-0910735
National Science Foundation | 1339822

Keywords

• Dense linear algebra
• GPU accelerators
• Parallel eigensolver
• Symmetric matrix-vector multiplication
• Symmetric tridiagonal reduction
