TY - GEN
T1 - Enabling and scaling matrix computations on heterogeneous multi-core and multi-GPU systems
AU - Song, Fengguang
AU - Tomov, Stanimire
AU - Dongarra, Jack
PY - 2012
Y1 - 2012
N2 - We present a new approach to utilizing all CPU cores and all GPUs on heterogeneous multicore and multi-GPU systems to support dense matrix computations efficiently. The main idea is that we treat a heterogeneous system as a distributed-memory machine, and use a heterogeneous multi-level block cyclic distribution method to allocate data to the host and multiple GPUs to minimize communication. We design heterogeneous algorithms with hybrid tiles to accommodate the processor heterogeneity, and introduce an auto-tuning method to determine the hybrid tile sizes to attain both high performance and load balancing. We have also implemented a new runtime system and applied it to the Cholesky and QR factorizations. Our approach is designed for achieving four objectives: a high degree of parallelism, minimized synchronization, minimized communication, and load balancing. Our experiments on a compute node (with two Intel Westmere hexa-core CPUs and three Nvidia Fermi GPUs), as well as on up to 100 compute nodes on the Keeneland system [31], demonstrate great scalability, good load balancing, and efficiency of our approach.
KW - Heterogeneous algorithms
KW - Hybrid CPU-GPU architectures
KW - Numerical linear algebra
KW - Runtime systems
UR - http://www.scopus.com/inward/record.url?scp=84864049244&partnerID=8YFLogxK
U2 - 10.1145/2304576.2304625
DO - 10.1145/2304576.2304625
M3 - Conference contribution
AN - SCOPUS:84864049244
SN - 9781450313162
T3 - Proceedings of the International Conference on Supercomputing
SP - 365
EP - 375
BT - ICS'12 - Proceedings of the 2012 ACM International Conference on Supercomputing
T2 - 26th ACM International Conference on Supercomputing, ICS'12
Y2 - 25 June 2012 through 29 June 2012
ER -