OpenMP application experiences: Porting to accelerated nodes

Seonmyeong Bak, Colleen Bertoni, Swen Boehm, Reuben Budiardja, Barbara M. Chapman, Johannes Doerfert, Markus Eisenbach, Hal Finkel, Oscar Hernandez, Joseph Huber, Shintaro Iwasaki, Vivek Kale, Paul R.C. Kent, Jae Hyuk Kwack, Meifeng Lin, Piotr Luszczek, Ye Luo, Buu Pham, Swaroop Pophale, Kiran Ravikumar, Vivek Sarkar, Thomas Scogland, Shilei Tian, P. K. Yeung

Research output: Contribution to journal › Article › peer-review

26 Scopus citations

Abstract

As recent enhancements to the OpenMP specification become available in its implementations, there is a need to share the results of experimentation in order to better understand the implementations' behavior in practice, to identify pitfalls, and to learn how they can be effectively deployed in scientific codes. We report on experiences gained and practices adopted when using OpenMP to port a variety of ECP applications, mini-apps, and libraries based on different computational motifs to accelerator-based leadership-class high-performance supercomputer systems at the United States Department of Energy. Additionally, we identify important challenges and open problems related to the deployment of OpenMP. Through our report of experiences, we find that OpenMP implementations are successful on current supercomputing platforms and that OpenMP is a promising programming model for applications to be run on emerging and future platforms with accelerated nodes.
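The ports surveyed in the abstract rely on OpenMP's device constructs to move loops and data onto accelerated nodes. As an illustrative sketch only (the function and data below are hypothetical, not taken from any of the surveyed applications), a typical offloaded loop combines `target teams distribute parallel for` with explicit `map` clauses; on a system without an attached accelerator or offload-capable compiler, the region falls back to the host:

```c
#include <stddef.h>

/* Hypothetical example: offload an element-wise vector addition.
 * Inputs a and b are copied to the device, the loop iterations are
 * distributed across teams of threads, and c is copied back. */
void vector_add(const double *a, const double *b, double *c, size_t n) {
    #pragma omp target teams distribute parallel for \
        map(to: a[0:n], b[0:n]) map(from: c[0:n])
    for (size_t i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}
```

Because the pragma is ignored by compilers without OpenMP support, the same source remains a correct serial program, which is one reason directive-based porting of the kind the paper reports is attractive.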

Original language: English
Article number: 102856
Journal: Parallel Computing
Volume: 109
DOIs
State: Published - Mar 2022

Funding

This work was funded in part by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration, in particular its subproject on Scaling OpenMP with LLVM for Exascale performance and portability (SOLLVE). This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). The development of some of the numerical software libraries tested in this work was supported by the National Science Foundation under OAC grant No. 2004541. This work was supported in part by the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.

Funders | Funder number
DOE Public Access Plan |
United States Government |
National Science Foundation |
U.S. Department of Energy |
Office of Advanced Cyberinfrastructure | 2004541
Office of Science | DE-AC02-06CH11357
National Nuclear Security Administration | DE-AC05-00OR22725

Keywords

• Accelerators
• Application porting experiences
• GAMESS
• GESTS
• GenASiS
• GridQCD
• High performance computing
• LSMS
• OpenMP implementations
• QMCPACK
• RAJA
• SLATE
