TY - GEN
T1 - GPU-aware non-contiguous data movement in Open MPI
AU - Wu, Wei
AU - Bosilca, George
AU - VandeVaart, Rolf
AU - Jeaugey, Sylvain
AU - Dongarra, Jack
N1 - Publisher Copyright:
Copyright © 2016 by the Association for Computing Machinery, Inc. (ACM).
PY - 2016/5/31
Y1 - 2016/5/31
N2 - Due to their superior parallel density and power efficiency, GPUs have become increasingly popular in scientific applications. Many of these applications are based on the ubiquitous Message Passing Interface (MPI) programming paradigm and take advantage of non-contiguous memory layouts to exchange data between processes. However, support for efficient non-contiguous data movement of GPU-resident data is still in its infancy, negatively impacting overall application performance. To address this shortcoming, we present a solution that exploits the inherent parallelism of the datatype packing and unpacking operations. We developed a close integration between Open MPI's stack-based datatype engine, NVIDIA's Unified Memory Architecture, and GPUDirect capabilities. In this design, the datatype packing and unpacking operations are offloaded onto the GPU and handled by specialized GPU kernels, while the CPU remains the driver for data movement between nodes. By incorporating our design into the Open MPI library, we demonstrate significantly better performance for non-contiguous GPU-resident data transfers on both shared- and distributed-memory machines.
AB - Due to their superior parallel density and power efficiency, GPUs have become increasingly popular in scientific applications. Many of these applications are based on the ubiquitous Message Passing Interface (MPI) programming paradigm and take advantage of non-contiguous memory layouts to exchange data between processes. However, support for efficient non-contiguous data movement of GPU-resident data is still in its infancy, negatively impacting overall application performance. To address this shortcoming, we present a solution that exploits the inherent parallelism of the datatype packing and unpacking operations. We developed a close integration between Open MPI's stack-based datatype engine, NVIDIA's Unified Memory Architecture, and GPUDirect capabilities. In this design, the datatype packing and unpacking operations are offloaded onto the GPU and handled by specialized GPU kernels, while the CPU remains the driver for data movement between nodes. By incorporating our design into the Open MPI library, we demonstrate significantly better performance for non-contiguous GPU-resident data transfers on both shared- and distributed-memory machines.
KW - Datatype
KW - GPU
KW - Hybrid architecture
KW - MPI
KW - Non-contiguous data
UR - http://www.scopus.com/inward/record.url?scp=84978536097&partnerID=8YFLogxK
U2 - 10.1145/2907294.2907317
DO - 10.1145/2907294.2907317
M3 - Conference contribution
AN - SCOPUS:84978536097
T3 - HPDC 2016 - Proceedings of the 25th ACM International Symposium on High-Performance Parallel and Distributed Computing
SP - 231
EP - 242
BT - HPDC 2016 - Proceedings of the 25th ACM International Symposium on High-Performance Parallel and Distributed Computing
PB - Association for Computing Machinery, Inc
T2 - 25th ACM International Symposium on High-Performance Parallel and Distributed Computing, HPDC 2016
Y2 - 31 May 2016 through 4 June 2016
ER -
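
Note: for readers unfamiliar with the pattern the abstract describes, the following is a minimal C sketch (not code from the paper) of the application-level usage it targets: sending a non-contiguous, GPU-resident layout through an MPI derived datatype. It assumes a CUDA-aware Open MPI build that accepts device pointers directly; the matrix dimensions, variable names, and compile line are illustrative only. The paper's contribution is inside the library (GPU-kernel packing/unpacking of such layouts); the application-side calls remain plain MPI.

    /* Illustrative sketch: exchange one strided column of a GPU-resident
     * matrix between two ranks. Assumed compile line: mpicc stride.c -lcudart
     * Run with 2 ranks, e.g.: mpirun -np 2 ./a.out
     */
    #include <mpi.h>
    #include <cuda_runtime.h>

    #define ROWS 1024
    #define COLS 1024

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Matrix lives in GPU memory (row-major); one column is
         * non-contiguous: ROWS blocks of 1 double, COLS doubles apart. */
        double *d_matrix;
        cudaMalloc((void **)&d_matrix, (size_t)ROWS * COLS * sizeof(double));

        MPI_Datatype column;
        MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        /* With a CUDA-aware Open MPI, the device pointer is passed directly;
         * packing/unpacking of the strided layout happens inside the library. */
        if (rank == 0)
            MPI_Send(d_matrix, 1, column, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_matrix, 1, column, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        MPI_Type_free(&column);
        cudaFree(d_matrix);
        MPI_Finalize();
        return 0;
    }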