Abstract
The convergence of artificial intelligence, high-performance computing (HPC), and data science brings unique opportunities for marked advances and discoveries that leverage synergies across scientific domains. Recently, deep learning (DL) models have been successfully applied to a wide spectrum of fields, from social network analysis to climate modeling. Such advances greatly benefit from already available HPC infrastructure, mainly GPU-enabled supercomputers. However, those powerful computing systems are exposed to failures, particularly silent data corruption (SDC), in which bit-flips occur without the program crashing. Consequently, exploring the impact of SDCs on DL models is vital for maintaining progress in many scientific domains. This paper uses a distinctive methodology to inject faults into the training phase of DL models. We use checkpoint file alteration to study the effect of bit-flips at different locations in a model and at different moments of training. Our strategy is general enough to allow the analysis of any combination of DL model and framework, so long as they produce a Hierarchical Data Format 5 (HDF5) checkpoint file. The experimental results confirm that popular DL models are often able to absorb dozens of bit-flips with a minimal impact on accuracy convergence.
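The core of the methodology is corrupting checkpoint files directly rather than instrumenting a specific framework. As a rough illustration only (not the authors' tooling), the sketch below assumes a Keras-style HDF5 checkpoint with float32 weight datasets and uses h5py and NumPy to XOR-flip one bit of a single weight in place; the file name, dataset path, and indices are hypothetical placeholders.

```python
# Minimal sketch of checkpoint-based fault injection, assuming float32 weights
# stored as HDF5 datasets (e.g., a Keras ".h5" checkpoint). Not the paper's code.
import h5py
import numpy as np

def flip_bit(checkpoint_path, dataset_path, element_index, bit_position):
    """Flip one bit of a single float32 element inside an HDF5 checkpoint."""
    with h5py.File(checkpoint_path, "r+") as f:    # open in read/write mode
        dset = f[dataset_path]
        values = dset[...].ravel()                 # load the weight tensor
        bits = values.view(np.uint32)              # reinterpret the bytes as integers
        mask = np.uint32(1 << bit_position)        # bit 0 = mantissa LSB, bit 31 = sign
        bits[element_index] ^= mask                # XOR toggles exactly one bit
        dset[...] = values.reshape(dset.shape)     # write the corrupted tensor back

# Hypothetical usage: corrupt a high-exponent bit of one convolution weight in the
# checkpoint saved after epoch 10, then resume training from the altered file.
# flip_bit("model_epoch_10.h5", "model_weights/conv2d/conv2d/kernel:0",
#          element_index=42, bit_position=30)
```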
Original language | English |
---|---|
Title of host publication | Proceedings - 2021 IEEE International Conference on Cluster Computing, Cluster 2021 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 492-503 |
Number of pages | 12 |
ISBN (Electronic) | 9781728196664 |
DOIs | |
State | Published - 2021 |
Event | 2021 IEEE International Conference on Cluster Computing, Cluster 2021 - Virtual, Portland, United States. Duration: Sep 7, 2021 → Sep 10, 2021 |
Publication series
Name | Proceedings - IEEE International Conference on Cluster Computing, ICCC |
---|---|
Volume | 2021-September |
ISSN (Print) | 1552-5244 |
Conference
Conference | 2021 IEEE International Conference on Cluster Computing, Cluster 2021 |
---|---|
Country/Territory | United States |
City | Virtual, Portland |
Period | 09/7/21 → 09/10/21 |
Funding
Notice: This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
Keywords
- Checkpoint
- Deep learning
- Fault injection
- HDF5
- High-performance computing
- Neural networks
- Resilience