TY - JOUR
T1 - DAGuE
T2 - A generic distributed DAG engine for High Performance Computing
AU - Bosilca, George
AU - Bouteiller, Aurelien
AU - Danalis, Anthony
AU - Herault, Thomas
AU - Lemarinier, Pierre
AU - Dongarra, Jack
PY - 2012/1
Y1 - 2012/1
N2 - The frenetic pace of development of current architectures places a strain on state-of-the-art programming environments. Harnessing the full potential of such architectures is a tremendous task for the whole scientific computing community. We present DAGuE, a generic framework for architecture-aware scheduling and management of micro-tasks on distributed many-core heterogeneous architectures. The applications we consider can be expressed as a Directed Acyclic Graph (DAG) of tasks with labeled edges designating data dependencies. DAGs are represented in a compact, problem-size-independent format that can be queried on demand to discover data dependencies in a fully distributed fashion. DAGuE assigns computation threads to cores, overlaps communications with computations, and uses a dynamic, fully distributed scheduler based on cache awareness, data locality, and task priority. We demonstrate the efficiency of our approach using several micro-benchmarks to analyze the performance of different components of the framework, and a linear algebra factorization as a use case.
KW - Architecture aware scheduling
KW - HPC
KW - Heterogeneous architectures
KW - Micro-task DAG
UR - http://www.scopus.com/inward/record.url?scp=84655174868&partnerID=8YFLogxK
DO - 10.1016/j.parco.2011.10.003
M3 - Article
AN - SCOPUS:84655174868
SN - 0167-8191
VL - 38
SP - 37
EP - 51
JO - Parallel Computing
JF - Parallel Computing
IS - 1-2
ER -