Abstract
Scientific applications that run on leadership computing facilities often face the challenge of being unable to fit leading science cases onto accelerator devices due to memory constraints (memory-bound applications). In this work, the authors studied one such US Department of Energy mission-critical condensed matter physics application, Dynamical Cluster Approximation (DCA++), and this paper discusses how device memory-bound challenges were successfully reduced by proposing an effective "all-to-all" communication method, a ring communication algorithm. This implementation takes advantage of acceleration on GPUs and remote direct memory access (RDMA) for fast data exchange between GPUs. Additionally, the ring algorithm was optimized with sub-ring communicators and multi-threaded support to further reduce communication overhead and expose more concurrency, respectively. The computation and communication were also analyzed using the Autonomic Performance Environment for Exascale (APEX) profiling tool, and this paper further discusses the performance trade-offs of the ring algorithm implementation. The memory analysis of the ring algorithm shows that the allocation size of the authors' most memory-intensive data structure per GPU is reduced to 1/p of the original size, where p is the number of GPUs in the ring communicator. The communication analysis suggests that the distributed Quantum Monte Carlo execution time grows linearly as the sub-ring size increases, and that the cost of messages passing through the network interface connector could be a limiting factor.
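The ring pattern summarized in the abstract can be illustrated with a minimal single-process sketch (a hypothetical illustration, not the DCA++ GPU/RDMA implementation): each of p ranks stores only a 1/p-sized block of the large data structure, and over p − 1 steps every rank forwards its current block to its neighbor, so every rank eventually observes all blocks while holding only one at a time.

```python
def ring_all_to_all(p):
    """Simulate a p-rank ring exchange.

    held[r] is the id of the block rank r currently holds (each block
    is 1/p of the full data structure); seen[r] records every block id
    rank r has observed. Returns seen after p - 1 forwarding steps.
    """
    held = list(range(p))                 # rank r starts with block r
    seen = [[held[r]] for r in range(p)]
    for _ in range(p - 1):
        # All ranks simultaneously send their block to (rank + 1) % p.
        held = [held[(r - 1) % p] for r in range(p)]
        for r in range(p):
            seen[r].append(held[r])
    return seen

seen = ring_all_to_all(4)
# Each rank sees all 4 blocks despite storing only one at a time.
assert all(sorted(s) == [0, 1, 2, 3] for s in seen)
```

In the paper's setting the forwarding step would be a GPU-to-GPU RDMA transfer, and the sub-ring optimization corresponds to running this loop over a smaller communicator than the full set of ranks.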
Original language | English |
---|---|
Title of host publication | Proceedings of the Platform for Advanced Scientific Computing Conference, PASC 2021 |
Publisher | Association for Computing Machinery, Inc |
ISBN (Electronic) | 9781450385633 |
DOIs | |
State | Published - Jul 5 2021 |
Event | 2021 Platform for Advanced Scientific Computing Conference, PASC 2021 - Virtual, Online, Switzerland Duration: Jul 5 2021 → Jul 9 2021 |
Publication series
Name | Proceedings of the Platform for Advanced Scientific Computing Conference, PASC 2021 |
---|
Conference
Conference | 2021 Platform for Advanced Scientific Computing Conference, PASC 2021 |
---|---|
Country/Territory | Switzerland |
City | Virtual, Online |
Period | 07/5/21 → 07/9/21 |
Funding
The authors would like to thank Thomas Maier (ORNL) and Giovanni Balduzzi (ETH Zurich) for their insights during the optimization phase of DCA++. This work was supported by the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research (ASCR) and the Basic Energy Sciences (BES) Division of Materials Sciences and Engineering, as well as the RAPIDS SciDAC Institute for Computer Science and Data under subcontract 4000159855 from ORNL. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725, and the Center for Computation & Technology at Louisiana State University.
Keywords
- DCA++
- Exascale machines
- GPU remote direct memory access
- Memory-bound issue
- Quantum Monte Carlo