Abstract
Linear algebra operations are widely used in big data analytics and scientific computation. Much work has been done on optimizing linear algebra operations on GPUs with regular-shaped input, but few works focus on fully utilizing GPU resources when the input is not regular-shaped. Current optimizations do not fully exploit the available memory bandwidth and computing power and therefore achieve only sub-optimal performance. In this paper, we propose two efficient algorithms, TSM2R and TSM2L, for two classes of tall-and-skinny matrix–matrix multiplication on GPUs. Both target multiplications in which at least one input matrix is tall-and-skinny: TSM2R is designed for a large regular-shaped matrix multiplying a tall-and-skinny matrix, while TSM2L is designed for a tall-and-skinny matrix multiplying a small regular-shaped matrix. We implement the proposed algorithms and evaluate them on several modern NVIDIA GPU micro-architectures. Experiments show that, compared with the current state-of-the-art works, (1) TSM2R speeds up the computation by 1.6x on average and improves memory bandwidth utilization and computing power utilization by 18.1% and 20.5% on average, respectively, when the regular-shaped matrix size is relatively large or medium; and (2) TSM2L speeds up the computation by 1.9x on average and improves memory bandwidth utilization by up to 9.3% on average when the regular-shaped matrix size is relatively small.
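To make the two operand-shape classes concrete, the following is a minimal NumPy sketch of the multiplications TSM2R and TSM2L target. The specific sizes (`n = 4096`, `k = 8`) are illustrative assumptions, not values from the paper, and this sketch is a plain CPU baseline, not the paper's GPU implementation.

```python
import numpy as np

# Illustrative sizes (assumed, not from the paper): k << n makes a
# matrix of shape (n, k) "tall-and-skinny".
n, k = 4096, 8

# TSM2R-style shapes: large regular (n x n) matrix times
# tall-and-skinny (n x k) matrix; the result is also (n x k).
A = np.random.rand(n, n)
B = np.random.rand(n, k)
C_r = A @ B

# TSM2L-style shapes: tall-and-skinny (n x k) matrix times
# small regular (k x k) matrix; the result is again (n x k).
D = np.random.rand(k, k)
C_l = B @ D

print(C_r.shape, C_l.shape)  # (4096, 8) (4096, 8)
```

In both cases the output stays tall-and-skinny, so the arithmetic intensity is far lower than in a square GEMM of the same leading dimension, which is why such inputs under-utilize a GPU without shape-aware optimization.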
Original language | English |
---|---|
Pages (from-to) | 70-85 |
Number of pages | 16 |
Journal | Journal of Parallel and Distributed Computing |
Volume | 151 |
DOIs | |
State | Published - May 2021 |
Funding
This research is supported by the National Science Foundation, USA under Grants OAC-2034169 and OAC-2003624. We would like to thank the University of Alabama for providing the startup support in this work. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper.
Funders | Funder number |
---|---|
Texas Advanced Computing Center | |
National Science Foundation | 2034169, 2042084, OAC-2034169, OAC-2003624 |
University of Texas at Austin | |
University of Alabama | |
Keywords
- CUDA
- GPU
- Matrix–matrix multiplication
- Performance optimization
- Tall-and-skinny matrix