Factorization and Inversion of a Million Matrices using GPUs: Challenges and Countermeasures

Ahmad Abdelfattah, Azzam Haidar, Stanimire Tomov, Jack Dongarra

Research output: Contribution to journal › Conference article › peer-review


Abstract

This paper presents new algorithmic approaches and optimization techniques for LU factorization and matrix inversion of millions of very small matrices using GPUs. These problems appear in many scientific applications including astrophysics and generation of block-Jacobi preconditioners. We show that, for very small problem sizes, design and optimization of GPU kernels require a mindset different from the one usually used when designing LAPACK algorithms for GPUs. Techniques for optimal memory traffic, register blocking, and tunable concurrency are incorporated in our proposed design. We also take advantage of the small matrix sizes to eliminate the intermediate row interchanges in both the factorization and inversion kernels. The proposed GPU kernels achieve performance speedups vs. CUBLAS of up to 6× for the factorization, and 14× for the inversion, using double precision arithmetic on a Pascal P100 GPU.
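The kernels themselves are not reproduced on this page. As a rough illustration of the batched, pivoting-free approach the abstract describes, a minimal CUDA sketch follows; the kernel name batched_lu_nopiv_kernel, the fixed size N, and the one-thread-per-matrix mapping are illustrative assumptions for very small matrices, not the authors' actual implementation.

// Illustrative sketch: batched LU factorization without row interchanges,
// one thread per matrix, with the whole matrix held in registers.
// Names and launch configuration are assumptions for this example only.
#include <cuda_runtime.h>

#define N 8  // assumed tiny matrix size; the paper targets very small matrices

__global__ void batched_lu_nopiv_kernel(double* A, int batch)
{
    int m = blockIdx.x * blockDim.x + threadIdx.x;  // which matrix this thread owns
    if (m >= batch) return;

    double r[N][N];                      // entire matrix kept in registers
    double* Am = A + (size_t)m * N * N;  // matrices stored contiguously, column-major

    // load the matrix from global memory
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i)
            r[i][j] = Am[i + j * N];

    // in-place right-looking LU, no pivoting (assumes nonzero diagonal)
    for (int k = 0; k < N; ++k) {
        double pivot = r[k][k];
        for (int i = k + 1; i < N; ++i) {
            r[i][k] /= pivot;                     // L multiplier
            for (int j = k + 1; j < N; ++j)
                r[i][j] -= r[i][k] * r[k][j];     // trailing update
        }
    }

    // store L and U back over the input matrix
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i)
            Am[i + j * N] = r[i][j];
}

// Host-side launch (illustrative): one thread per matrix.
// int threads = 128;
// batched_lu_nopiv_kernel<<<(batch + threads - 1) / threads, threads>>>(dA, batch);

Keeping the whole matrix in registers mirrors the register-blocking idea mentioned in the abstract, and skipping row interchanges is only safe when the application can guarantee it (for example, diagonally dominant blocks in block-Jacobi preconditioning), which is the kind of property such batched kernels exploit.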

Original language: English
Pages (from-to): 606-615
Number of pages: 10
Journal: Procedia Computer Science
Volume: 108
DOIs
State: Published - 2017
Event: International Conference on Computational Science, ICCS 2017 - Zurich, Switzerland
Duration: Jun 12 2017 - Jun 14 2017
