TY - GEN
T1 - Process distance-aware adaptive MPI collective communications
AU - Ma, Teng
AU - Herault, Thomas
AU - Bosilca, George
AU - Dongarra, Jack J.
PY - 2011
Y1 - 2011
N2 - Message Passing Interface (MPI) implementations provide great flexibility, allowing users to arbitrarily bind processes to computing cores to fully exploit clusters of multicore/many-core nodes. Intelligent process placement can optimize application performance according to the underlying hardware architecture and the application's communication pattern. However, such static process placement optimization cannot help MPI collective communication, whose topology changes dynamically with the members of each communicator. Consequently, a mismatch between the collective communication topology, the underlying hardware architecture, and the process placement often arises due to MPI's limited capability of dealing with complex environments. This paper proposes an adaptive collective communication framework that combines process distance, underlying hardware topologies, and the runtime communicator. Based on this information, an optimal communication topology is generated to guarantee maximum bandwidth for each MPI collective operation regardless of process placement. On top of this framework, two distance-aware adaptive intra-node collective operations (Broadcast and Allgather) are implemented as examples inside Open MPI's KNEM collective component. The awareness of process distance helps these two operations construct optimal runtime topologies and balance memory accesses across memory nodes. The experiments show that these two distance-aware collective operations provide better and more stable performance than the current collectives in Open MPI regardless of process placement.
KW - Collective Communication
KW - Hierarchical Algorithm
KW - MPI
KW - Process Distance
KW - Ring Algorithm
UR - http://www.scopus.com/inward/record.url?scp=80955141014&partnerID=8YFLogxK
U2 - 10.1109/CLUSTER.2011.30
DO - 10.1109/CLUSTER.2011.30
M3 - Conference contribution
AN - SCOPUS:80955141014
SN - 9780769545165
T3 - Proceedings - IEEE International Conference on Cluster Computing, ICCC
SP - 196
EP - 204
BT - Proceedings - 2011 IEEE International Conference on Cluster Computing, CLUSTER 2011
T2 - 2011 IEEE International Conference on Cluster Computing, CLUSTER 2011
Y2 - 26 September 2011 through 30 September 2011
ER -