Celeritas: accelerating Geant4 with GPUs

Seth R. Johnson, Julien Esseiva, Elliott Biondo, Philippe Canal, Marcel Demarteau, Thomas Evans, Soon Yung Jun, Guilherme Lima, Amanda Lund, Paul Romano, Stefano C. Tognini

Research output: Contribution to journal · Conference article · peer-review

Abstract

Celeritas [1] is a new Monte Carlo (MC) detector simulation code designed for computationally intensive applications, specifically High Luminosity Large Hadron Collider (HL-LHC) simulation, on high-performance heterogeneous architectures. Over the past two years, Celeritas has advanced from prototyping a single GPU-based physics model in an infinite medium to implementing a full set of electromagnetic (EM) physics processes in complex geometries. The current release, version 0.3, incorporates full device-based navigation, an event loop in the presence of magnetic fields, and detector hit scoring. New functionality includes a scheduler that offloads electromagnetic physics to the GPU within a Geant4-driven simulation, enabling integration of Celeritas into high energy physics (HEP) experimental frameworks such as CMSSW. On the Summit supercomputer, Celeritas performs EM physics 6–32× faster using the machine's NVIDIA GPUs than using only its CPUs. When running a multithreaded Geant4 ATLAS test-beam application with full hadronic physics, using Celeritas to accelerate the EM physics yields an overall simulation speedup of 1.8–2.3× on GPU and 1.2× on CPU.

Original language: English
Article number: 11005
Journal: EPJ Web of Conferences
Volume: 295
DOIs
State: Published - May 6, 2024
Event: 26th International Conference on Computing in High Energy and Nuclear Physics, CHEP 2023 - Norfolk, United States
Duration: May 8, 2023 - May 12, 2023

Funding

This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research and Office of High Energy Physics, Scientific Discovery through Advanced Computing (SciDAC) program. Work for this paper was supported by Oak Ridge National Laboratory (ORNL), which is managed and operated by UT-Battelle, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE-AC05-00OR22725, and by Fermi National Accelerator Laboratory, managed and operated by Fermi Research Alliance, LLC, under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy. This research was supported by the Exascale Computing Project (ECP), project number 17-SC-20-SC. The ECP is a collaborative effort of two DOE organizations, the Office of Science and the National Nuclear Security Administration, responsible for the planning and preparation of a capable exascale ecosystem, including software, applications, hardware, advanced system engineering, and early testbed platforms, to support the nation's exascale computing imperative.

This research used resources of the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. This research also used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award HEP-ERCAP-0023868.
