Abstract
GPU memory corruption, and in particular double-bit errors (DBEs), remains one of the least understood aspects of HPC system reliability. Although rare, these errors always lead to job termination and can cost thousands of node-hours, either as wasted computation or as the overhead of the regular checkpointing needed to minimize the losses. As supercomputers and their components simultaneously grow in scale, density, failure rates, and environmental footprint, the efficiency of HPC operations becomes both an imperative and a challenge. We examine DBEs using system telemetry data and logs collected from the Summit supercomputer, which is equipped with 27,648 Tesla V100 GPUs with second-generation high-bandwidth memory (HBM2). Using exploratory data analysis and statistical learning, we extract several insights about memory reliability in such GPUs. We find that GPUs with prior DBE occurrences are prone to experience them again due to otherwise harmless factors, correlate this phenomenon with GPU placement, and suggest manufacturing variability as a contributing factor. On the general population of GPUs, we link DBEs to short- and long-term high-power-consumption modes, while finding no significant correlation with higher temperatures. We also show that workload type can be a factor in memory's propensity to corruption.
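The abstract's finding that DBEs correlate with high power draw but not with higher temperatures is the kind of relationship one could probe with a simple exploratory check on per-GPU telemetry summaries. The sketch below is purely illustrative and is not the authors' analysis pipeline: it uses synthetic data and hypothetical column names (`power_w`, `temp_c`, `dbe`) to show a point-biserial correlation between a binary DBE indicator and each telemetry feature.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-GPU summary: mean board power (W), mean memory temperature (C),
# and whether a double-bit error (DBE) was ever recorded for that GPU.
# Toy generative assumption: DBE probability rises with power, not temperature.
n_gpus = 1000
power = rng.normal(250, 30, n_gpus)   # hypothetical mean power draw
temp = rng.normal(70, 5, n_gpus)      # hypothetical mean HBM2 temperature
p_dbe = 1 / (1 + np.exp(-(power - 300) / 15))
dbe = rng.random(n_gpus) < p_dbe

df = pd.DataFrame({"power_w": power, "temp_c": temp, "dbe": dbe})

# Point-biserial correlation between DBE occurrence and each telemetry feature,
# mirroring the kind of exploratory comparison described in the abstract.
for col in ("power_w", "temp_c"):
    r, p = stats.pointbiserialr(df["dbe"].astype(int), df[col])
    print(f"{col}: r={r:+.3f}, p={p:.3g}")
```

On synthetic data generated this way, the power feature shows a strong positive correlation with DBE occurrence while temperature does not; on real telemetry the same check would simply quantify whichever association is present.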
| Original language | English |
|---|---|
| Title of host publication | ICS 2024 - Proceedings of the 38th ACM International Conference on Supercomputing |
| Publisher | Association for Computing Machinery |
| Pages | 188-200 |
| Number of pages | 13 |
| ISBN (Electronic) | 9798400706103 |
| DOIs | |
| State | Published - May 30 2024 |
| Event | 38th ACM International Conference on Supercomputing, ICS 2024 - Kyoto, Japan (Jun 4 2024 → Jun 7 2024) |
Publication series
| Name | Proceedings of the International Conference on Supercomputing |
|---|---|
Conference
| Conference | 38th ACM International Conference on Supercomputing, ICS 2024 |
|---|---|
| Country/Territory | Japan |
| City | Kyoto |
| Period | 06/4/24 → 06/7/24 |
Funding
This work was supported by, and used the resources of, the Oak Ridge Leadership Computing Facility, located in the National Center for Computational Sciences at ORNL, managed by UT-Battelle, LLC for the U.S. DOE under contract No. DE-AC05-00OR22725. The publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of the manuscript, or allow others to do so, for U.S. Government purposes. The DOE will provide public access to these results in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). Smirni and Schmedding were partially supported by National Science Foundation grant IIS-2130681.
Keywords
- GPU memory failures
- HPC
- data analysis