Impacts of Multi-GPU MPI Collective Communications on Large FFT Computation

Alan Ayala, Stanimire Tomov, Xi Luo, Hejer Shaiek, Azzam Haidar, George Bosilca, Jack Dongarra

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

20 Scopus citations

Abstract

Most applications targeting exascale, such as those in the Exascale Computing Project (ECP), are designed for heterogeneous architectures and rely on the Message Passing Interface (MPI) as their underlying parallel programming model. In this paper we analyze the limitations of collective MPI communication for the computation of fast Fourier transforms (FFTs), on which large-scale particle simulations rely heavily. We present experiments performed on one of the largest heterogeneous platforms, the Summit supercomputer at ORNL. We discuss communication models from state-of-the-art FFT libraries and propose a new FFT library, HEFFTE (Highly Efficient FFTs for Exascale), which supports heterogeneous architectures and yields considerable speedups over CPU libraries while maintaining good weak and strong scalability.
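
To make the communication pattern in the abstract concrete, below is a minimal sketch, in C with MPI, of the global-transpose step that dominates a distributed 3-D FFT. This is an illustration under stated assumptions, not heFFTe's implementation: the grid size N, the slab decomposition, and the use of a single MPI_Alltoall per transpose are illustrative choices for exposition.

/* Minimal sketch (not heFFTe's actual code) of the global transpose in a
 * distributed 3-D FFT. With a slab decomposition over P ranks, each rank
 * exchanges one (N/P) x (N/P) x N block with every peer between the 1-D
 * FFT stages. N and the single-collective formulation are assumptions. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int N = 64;                       /* hypothetical global grid edge */
    if (N % nprocs != 0) {
        if (rank == 0) fprintf(stderr, "run with a rank count dividing %d\n", N);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Points exchanged with each peer during the transpose. */
    int chunk = (N / nprocs) * (N / nprocs) * N;
    double *sendbuf = malloc((size_t)nprocs * chunk * sizeof *sendbuf);
    double *recvbuf = malloc((size_t)nprocs * chunk * sizeof *recvbuf);
    for (int i = 0; i < nprocs * chunk; i++)
        sendbuf[i] = rank + 1e-6 * i;       /* placeholder grid data */

    /* The collective the paper's analysis centers on: one all-to-all per
     * transpose, repeated between FFT stages. At scale, this exchange, not
     * the local 1-D FFTs, tends to limit performance. */
    double t0 = MPI_Wtime();
    MPI_Alltoall(sendbuf, chunk, MPI_DOUBLE,
                 recvbuf, chunk, MPI_DOUBLE, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("all-to-all: %d doubles per peer, %.6f s\n", chunk, t1 - t0);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}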

Original language: English
Title of host publication: Proceedings of ExaMPI 2019
Subtitle of host publication: Workshop on Exascale MPI - Held in conjunction with SC 2019: The International Conference for High Performance Computing, Networking, Storage and Analysis
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 12-18
Number of pages: 7
ISBN (Electronic): 9781728160092
DOIs
State: Published - Nov 2019
Externally published: Yes
Event: 2019 IEEE/ACM Workshop on Exascale MPI, ExaMPI 2019 - Denver, United States
Duration: Nov 17 2019 → …

Publication series

Name: Proceedings of ExaMPI 2019: Workshop on Exascale MPI - Held in conjunction with SC 2019: The International Conference for High Performance Computing, Networking, Storage and Analysis

Conference

Conference: 2019 IEEE/ACM Workshop on Exascale MPI, ExaMPI 2019
Country/Territory: United States
City: Denver
Period: 11/17/19 → …

Funding

Acknowledgment: This research was supported by the Exascale Computing Project (ECP), Project Number 17-SC-20-SC, a collaborative effort of two DOE organizations (the Office of Science and the National Nuclear Security Administration) responsible for the planning and preparation of a capable exascale ecosystem.

Funders (funder numbers not listed): DOE organizations; Office of Science; National Nuclear Security Administration
