TY - JOUR
T1 - Application of performance portability solutions for GPUs and many-core CPUs to track reconstruction kernels
AU - Kwok, Ka Hei Martin
AU - Kortelainen, Matti
AU - Cerati, Giuseppe
AU - Strelchenko, Alexei
AU - Gutsche, Oliver
AU - Hall, Allison Reinsvold
AU - Lantz, Steve
AU - Reid, Michael
AU - Riley, Daniel
AU - Berkman, Sophie
AU - Lee, Seyong
AU - Ather, Hammad
AU - Norris, Boyana
AU - Wang, Cong
N1 - Publisher Copyright:
© The Authors, published by EDP Sciences, 2024.
PY - 2024/5/6
Y1 - 2024/5/6
N2 - Next-generation High-Energy Physics (HEP) experiments face significant computational challenges, both in data volume and in processing power. Compute accelerators, such as GPUs, are one promising way to provide the necessary computational power. However, current programming models for accelerators often rely on architecture-specific languages promoted by the hardware vendors, which limits the set of platforms on which the code can run. Developing software under such platform restrictions is especially infeasible for HEP communities, since converting typical HEP algorithms into forms that run efficiently on accelerators takes significant effort. Multiple performance portability solutions have recently emerged and provide an alternative path to using compute accelerators, allowing the same code to execute on hardware from different vendors. We apply several portability solutions, including Kokkos, SYCL, C++17 std::execution::par, Alpaka, and OpenMP/OpenACC, to two mini-apps extracted from the mkFit project: p2z and p2r. These apps contain basic kernels of a Kalman filter track fit, such as propagation and update of track parameters, for detectors at a fixed z or fixed r position, respectively. The two mini-apps also explore different memory layout formats. We report on the development experience with the different portability solutions, as well as their performance on GPUs and many-core CPUs, measured as kernel throughput on hardware from different vendors, including NVIDIA, AMD, and Intel.
AB - Next-generation High-Energy Physics (HEP) experiments face significant computational challenges, both in data volume and in processing power. Compute accelerators, such as GPUs, are one promising way to provide the necessary computational power. However, current programming models for accelerators often rely on architecture-specific languages promoted by the hardware vendors, which limits the set of platforms on which the code can run. Developing software under such platform restrictions is especially infeasible for HEP communities, since converting typical HEP algorithms into forms that run efficiently on accelerators takes significant effort. Multiple performance portability solutions have recently emerged and provide an alternative path to using compute accelerators, allowing the same code to execute on hardware from different vendors. We apply several portability solutions, including Kokkos, SYCL, C++17 std::execution::par, Alpaka, and OpenMP/OpenACC, to two mini-apps extracted from the mkFit project: p2z and p2r. These apps contain basic kernels of a Kalman filter track fit, such as propagation and update of track parameters, for detectors at a fixed z or fixed r position, respectively. The two mini-apps also explore different memory layout formats. We report on the development experience with the different portability solutions, as well as their performance on GPUs and many-core CPUs, measured as kernel throughput on hardware from different vendors, including NVIDIA, AMD, and Intel.
UR - http://www.scopus.com/inward/record.url?scp=85212218503&partnerID=8YFLogxK
U2 - 10.1051/epjconf/202429511003
DO - 10.1051/epjconf/202429511003
M3 - Conference article
AN - SCOPUS:85212218503
SN - 2101-6275
VL - 295
JO - EPJ Web of Conferences
JF - EPJ Web of Conferences
M1 - 11003
T2 - 26th International Conference on Computing in High Energy and Nuclear Physics, CHEP 2023
Y2 - 8 May 2023 through 12 May 2023
ER -