TY - GEN
T1 - Weighted dynamic scheduling with many parallelism grains for offloading of numerical workloads to multiple varied accelerators
AU - Haidar, Azzam
AU - Jia, Yulu
AU - Luszczek, Piotr
AU - Tomov, Stanimire
AU - YarKhan, Asim
AU - Dongarra, Jack
N1 - Publisher Copyright:
© 2015 ACM.
PY - 2015/11/15
Y1 - 2015/11/15
N2 - A wide variety of heterogeneous compute resources are available to modern computers, including multiple sockets containing multicore CPUs, one or more GPUs of varying power, and coprocessors such as the Intel Xeon Phi. The challenge faced by domain scientists is how to efficiently and productively use these varied resources. For example, in order to use GPUs effectively, the workload must have a greater degree of parallelism than a workload designed for a multicore CPU. The domain scientist would have to design and schedule an application with multiple degrees of parallelism and task grain sizes in order to obtain efficient performance from the resources. We propose a productive programming model starting from serial code, which achieves parallelism and scalability by using a task-superscalar runtime environment to adapt the computation to the available resources. The adaptation is done at multiple points, including multi-level data partitioning, adaptive task grain sizes, and dynamic task scheduling. The effectiveness of this approach for utilizing multi-way heterogeneous hardware resources is demonstrated by implementing dense linear algebra applications.
AB - A wide variety of heterogeneous compute resources are available to modern computers, including multiple sockets containing multicore CPUs, one or more GPUs of varying power, and coprocessors such as the Intel Xeon Phi. The challenge faced by domain scientists is how to efficiently and productively use these varied resources. For example, in order to use GPUs effectively, the workload must have a greater degree of parallelism than a workload designed for a multicore CPU. The domain scientist would have to design and schedule an application with multiple degrees of parallelism and task grain sizes in order to obtain efficient performance from the resources. We propose a productive programming model starting from serial code, which achieves parallelism and scalability by using a task-superscalar runtime environment to adapt the computation to the available resources. The adaptation is done at multiple points, including multi-level data partitioning, adaptive task grain sizes, and dynamic task scheduling. The effectiveness of this approach for utilizing multi-way heterogeneous hardware resources is demonstrated by implementing dense linear algebra applications.
KW - Dataflow scheduling
KW - Hardware accelerators
KW - Multi-grain parallelism
UR - http://www.scopus.com/inward/record.url?scp=84968562116&partnerID=8YFLogxK
U2 - 10.1145/2832080.2832085
DO - 10.1145/2832080.2832085
M3 - Conference contribution
AN - SCOPUS:84968562116
T3 - Proceedings of ScalA 2015: 6th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems - Held in conjunction with SC 2015: The International Conference for High Performance Computing, Networking, Storage and Analysis
BT - Proceedings of ScalA 2015
PB - Association for Computing Machinery, Inc
T2 - 6th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems, ScalA 2015
Y2 - 15 November 2015 through 20 November 2015
ER -