Abstract
Solving a large number of relatively small linear systems has recently drawn more attention in the HPC community, due to the importance of such computational workloads in many scientific applications, including sparse multifrontal solvers. Modern hardware accelerators and their architecture require a set of optimization techniques that are very different from the ones used in solving one relatively large matrix. In order to impose concurrency on such throughput-oriented architectures, a common practice is to batch the solution of these matrices as one task offloaded to the underlying hardware, rather than solving them individually. This paper presents a high-performance batched Cholesky factorization on large sets of relatively small matrices using Graphics Processing Units (GPUs), and addresses both fixed- and variable-size batched problems. We investigate various algorithm designs and optimization techniques, and show that it is essential to combine kernel design with performance tuning in order to achieve the best possible performance. We compare our approaches against state-of-the-art CPU solutions as well as GPU-based solutions using existing libraries, and show that, on a K40c GPU for example, our kernels are more than 2× faster.
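The batching idea the abstract describes, handling many small independent factorizations as a single offloaded operation rather than a loop of individual solves, can be illustrated with a minimal NumPy sketch. This is only an assumption-laden stand-in for the paper's CUDA kernels: the batch size and matrix dimension below are hypothetical, and NumPy's stacked-array `np.linalg.cholesky` plays the role of a batched GPU routine.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, n = 1000, 32  # hypothetical batch of many small matrices

# Build a batch of symmetric positive definite matrices A_i = B_i B_i^T + n*I,
# so every matrix in the stack admits a Cholesky factorization.
B = rng.standard_normal((batch, n, n))
A = B @ B.transpose(0, 2, 1) + n * np.eye(n)

# Batched Cholesky: one call over the whole (batch, n, n) stack, analogous to
# offloading the entire batch as a single task instead of solving per matrix.
L = np.linalg.cholesky(A)

# Verify A_i = L_i L_i^T for every matrix in the batch.
assert L.shape == (batch, n, n)
assert np.allclose(L @ L.transpose(0, 2, 1), A)
```

On a GPU, libraries expose the same pattern through batched interfaces (e.g. MAGMA's batched routines or cuSOLVER's batched potrf), where the fixed-size case maps naturally to one kernel launch and the variable-size case requires the per-matrix dimensions the paper also addresses.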
Original language | English |
---|---|
Pages (from-to) | 119-130 |
Number of pages | 12 |
Journal | Procedia Computer Science |
Volume | 80 |
DOIs | |
State | Published - 2016 |
Event | International Conference on Computational Science, ICCS 2016, San Diego, United States (Jun 6 2016 → Jun 8 2016) |
Funding
This material is based on work supported by NSF under Grants No. CSR 1514286 and ACI-1339822, by NVIDIA, and in part by the Russian Science Foundation, Agreement N14-11-00190.
Funders | Funder number |
---|---|
National Science Foundation | CSR 1514286, ACI-1339822 |
NVIDIA | |
Russian Science Foundation | N14-11-00190 |
Keywords
- Batched computation
- Cholesky factorization
- GPUs
- Tuning