Performance portability study for massively parallel computational fluid dynamics application on scalable heterogeneous architectures

Research output: Contribution to journal · Article · peer-review


Abstract

Patient-specific hemodynamic simulations have the potential to greatly improve both the diagnosis and treatment of a variety of vascular diseases. Portability will enable wider adoption of computational fluid dynamics (CFD) applications in the biomedical research community and allow applications to target platforms ideally suited to different vascular regions. In this work, we present a case study in performance portability that assesses (1) the ease of porting an MPI application optimized for one specific architecture to new platforms using variants of hybrid MPI+X programming models; (2) the performance portability observed when simulating blood flow in three different vascular regions on diverse heterogeneous architectures; (3) model-based performance prediction for future architectures; and (4) the performance scaling of the hybrid MPI+X implementations on parallel heterogeneous systems. We discuss the lessons learned in porting HARVEY, a massively parallel CFD application, from traditional multicore CPUs to diverse heterogeneous architectures ranging from NVIDIA/AMD GPUs to Intel MICs and Altera FPGAs.
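
To make the "hybrid MPI+X" idea concrete, the following is a minimal, generic sketch of an MPI+OpenACC update loop: MPI handles halo exchange between ranks while OpenACC offloads the local update to an accelerator. It is not code from HARVEY; the array names, domain size, and the simple relaxation kernel are invented for illustration only.

/* Illustrative sketch only: generic hybrid MPI+OpenACC pattern.
 * NOT taken from HARVEY; names, sizes, and the placeholder kernel
 * are assumptions made for this example. */
#include <mpi.h>
#include <stdlib.h>

#define NLOCAL 1024  /* lattice sites owned by this rank (assumed size) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* one ghost cell on each side for halo exchange */
    double *f    = calloc(NLOCAL + 2, sizeof(double));
    double *fnew = calloc(NLOCAL + 2, sizeof(double));
    int left  = (rank - 1 + nranks) % nranks;
    int right = (rank + 1) % nranks;

    #pragma acc data copy(f[0:NLOCAL+2]) create(fnew[0:NLOCAL+2])
    for (int step = 0; step < 100; ++step) {
        /* MPI between ranks: exchange boundary cells with neighbors */
        #pragma acc update host(f[1:1], f[NLOCAL:1])
        MPI_Sendrecv(&f[NLOCAL], 1, MPI_DOUBLE, right, 0,
                     &f[0],      1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&f[1],        1, MPI_DOUBLE, left,  1,
                     &f[NLOCAL+1], 1, MPI_DOUBLE, right, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        #pragma acc update device(f[0:1], f[NLOCAL+1:1])

        /* the "X" in MPI+X: offload the local update to the accelerator */
        #pragma acc parallel loop present(f[0:NLOCAL+2], fnew[0:NLOCAL+2])
        for (int i = 1; i <= NLOCAL; ++i)
            fnew[i] = 0.5 * (f[i-1] + f[i+1]);  /* placeholder relaxation */

        /* copy the result back into f on the device for the next step */
        #pragma acc parallel loop present(f[0:NLOCAL+2], fnew[0:NLOCAL+2])
        for (int i = 1; i <= NLOCAL; ++i)
            f[i] = fnew[i];
    }

    free(f);
    free(fnew);
    MPI_Finalize();
    return 0;
}

The same hybrid structure applies when the "X" is CUDA, OpenCL, or OpenMP offload: only the device kernel and data-movement directives change, which is the portability question the paper studies.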

Original language: English
Pages (from-to): 1-13
Number of pages: 13
Journal: Journal of Parallel and Distributed Computing
Volume: 129
DOIs
State: Published - Jul 2019

Funding

This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory (ORNL), which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725, and computing resources provided by the Lawrence Livermore National Laboratory (LLNL) Institutional Computing Grand Challenge program. This material is based upon work supported by the U.S. Department of Energy (DOE), Office of Science, Office of Advanced Scientific Computing Research, United States. This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the DOE. The United States Government (USG) retains and the publisher, by accepting the article for publication, acknowledges that the USG retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for USG purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). Research reported in this publication is also supported by the ORNL Joint Faculty Program, United States and the Office of the Director, National Institutes of Health, United States under Award Number DP5OD019876. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Funders (funder number):

• National Institutes of Health (DP5OD019876)
• U.S. Department of Energy (DE-AC05-00OR22725)
• Office of the Director
• Office of Science
• Advanced Scientific Computing Research
• Oak Ridge National Laboratory

Keywords

• Computational fluid dynamics
• Heterogeneous architectures
• Lattice Boltzmann method
• OpenACC
• Patient-specific hemodynamics
• Performance portability
• Performance prediction
