TY - GEN
T1 - Unified Communication Optimization Strategies for Sparse Triangular Solver on CPU and GPU Clusters
AU - Liu, Yang
AU - Ding, Nan
AU - Sao, Piyush
AU - Williams, Samuel
AU - Li, Xiaoye Sherry
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023
Y1 - 2023
N2 - This paper presents a unified communication optimization framework for sparse triangular solve (SpTRSV) algorithms on CPU and GPU clusters. The framework builds upon a 3D communication-avoiding (CA) layout of Px × Py × Pz processes that divides a sparse matrix into Pz submatrices, each handled by a Px × Py 2D grid with block-cyclic distribution. We propose three communication optimization strategies: First, a new 3D SpTRSV algorithm is developed, which trades the inter-grid communication and synchronization for replicated computation. This design requires only one inter-grid synchronization, and the inter-grid communication is efficiently implemented with sparse allreduce operations. Second, broadcast and reduction communication trees are used to reduce message latency of the intra-grid 2D communication on CPU clusters. Finally, we leverage GPU-initiated one-sided communication to implement the communication trees on GPU clusters. With these nested inter- and intra-grid communication optimization strategies, the proposed 3D SpTRSV algorithm can attain up to 3.45x speedups compared to the baseline 3D SpTRSV algorithm using up to 2048 Cori Haswell CPU cores. In addition, the proposed GPU 3D SpTRSV algorithm can achieve up to 6.5x speedups compared to the proposed CPU 3D SpTRSV algorithm with Pz up to 64. Finally, it is remarkable that the proposed GPU 3D SpTRSV can scale to 256 GPUs on the Perlmutter system while the existing 2D SpTRSV algorithm can only scale up to 4 GPUs.
AB - This paper presents a unified communication optimization framework for sparse triangular solve (SpTRSV) algorithms on CPU and GPU clusters. The framework builds upon a 3D communication-avoiding (CA) layout of Px × Py × Pz processes that divides a sparse matrix into Pz submatrices, each handled by a Px × Py 2D grid with block-cyclic distribution. We propose three communication optimization strategies: First, a new 3D SpTRSV algorithm is developed, which trades the inter-grid communication and synchronization for replicated computation. This design requires only one inter-grid synchronization, and the inter-grid communication is efficiently implemented with sparse allreduce operations. Second, broadcast and reduction communication trees are used to reduce message latency of the intra-grid 2D communication on CPU clusters. Finally, we leverage GPU-initiated one-sided communication to implement the communication trees on GPU clusters. With these nested inter- and intra-grid communication optimization strategies, the proposed 3D SpTRSV algorithm can attain up to 3.45x speedups compared to the baseline 3D SpTRSV algorithm using up to 2048 Cori Haswell CPU cores. In addition, the proposed GPU 3D SpTRSV algorithm can achieve up to 6.5x speedups compared to the proposed CPU 3D SpTRSV algorithm with Pz up to 64. Finally, it is remarkable that the proposed GPU 3D SpTRSV can scale to 256 GPUs on the Perlmutter system while the existing 2D SpTRSV algorithm can only scale up to 4 GPUs.
KW - NVSHMEM
KW - SpTRSV
KW - communication optimization
KW - communication-avoiding algorithm
KW - sparse matrix
KW - supernodal method
KW - triangular solve
UR - http://www.scopus.com/inward/record.url?scp=85190400238&partnerID=8YFLogxK
U2 - 10.1145/3581784.3607092
DO - 10.1145/3581784.3607092
M3 - Conference contribution
AN - SCOPUS:85190400238
T3 - International Conference for High Performance Computing, Networking, Storage and Analysis, SC
BT - SC 2023 - International Conference for High Performance Computing, Networking, Storage and Analysis
PB - IEEE Computer Society
T2 - 2023 International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2023
Y2 - 12 November 2023 through 17 November 2023
ER -