Abstract
Starting with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF) in 2012, top supercomputers have increasingly leveraged the performance of GPUs to support large-scale computational science. The current No. 1 machine, the 200-petaflop Summit system at OLCF, is GPU-based. Accelerator-based architectures, however, introduce additional complexity due to node heterogeneity. To inform procurement decisions, supercomputing centers need tools that can quickly model the impact of changes in node architecture on application performance. We present AHEAD, a profiling and modeling tool that quantifies the impact of the intra-node communication mechanism (e.g., PCIe or NVLink) on application performance. Our experiments show average weighted relative errors of ~19% and ~23% for five CORAL-2 benchmarks (CORAL-2 is a collaboration among multiple US Department of Energy (DOE) laboratories to procure exascale systems) and 12 Rodinia benchmarks, respectively, without running the applications on the target future node.
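The paper's own methodology is not reproduced in this record. As a minimal, self-contained sketch of the kind of intra-node transfer AHEAD characterizes, the following CUDA program times a pinned host-to-device copy with CUDA events to estimate effective interconnect (PCIe or NVLink) bandwidth. The buffer size and all names are illustrative assumptions, not taken from the paper.

```cuda
// Illustrative sketch only (not from the paper): estimate host-to-device
// bandwidth over the intra-node interconnect (PCIe or NVLink) by timing a
// single pinned-memory copy with CUDA events.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256u << 20;         // 256 MiB; illustrative size
    float *host = nullptr, *dev = nullptr;
    cudaMallocHost(&host, bytes);            // pinned host memory enables full DMA bandwidth
    cudaMalloc(&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);              // wait until the copy and stop event complete

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  // elapsed time in milliseconds
    printf("Host-to-device: %.2f GB/s\n", (bytes / 1e9) / (ms / 1e3));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(dev);
    cudaFreeHost(host);
    return 0;
}
```

On a PCIe 3.0 x16 link this kind of measurement typically reports around 12 GB/s, while NVLink-connected CPU-GPU pairs (as on Summit's POWER9 nodes) reach substantially higher rates; that gap is precisely the architectural difference whose application-level impact AHEAD predicts.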
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings - 2019 IEEE 33rd International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2019 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 583-592 |
| Number of pages | 10 |
| ISBN (Electronic) | 9781728135106 |
| DOIs | |
| State | Published - May 2019 |
| Event | 33rd IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2019 - Rio de Janeiro, Brazil. Duration: May 20, 2019 → May 24, 2019 |
Publication series

| Name | Proceedings - 2019 IEEE 33rd International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2019 |
| --- | --- |
Conference

| Conference | 33rd IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2019 |
| --- | --- |
| Country/Territory | Brazil |
| City | Rio de Janeiro |
| Period | 05/20/19 → 05/24/19 |
Funding
This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. This work is supported in part by the Institute for Computing, Information and Cognitive Systems (ICICS) at UBC.
Keywords
- CPU-GPU Communication
- CUDA
- GPU
- Heterogeneous Systems
- Performance Analysis
- Predictive Modeling