TY - GEN
T1 - Understanding GPU errors on large-scale HPC systems and the implications for system design and operation
AU - Tiwari, Devesh
AU - Gupta, Saurabh
AU - Rogers, James
AU - Maxwell, Don
AU - Rech, Paolo
AU - Vazhkudai, Sudharshan
AU - Oliveira, Daniel
AU - Londo, Dave
AU - Debardeleben, Nathan
AU - Navaux, Philippe
AU - Carro, Luigi
AU - Bland, Arthur
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/3/6
Y1 - 2015/3/6
N2 - Increases in graphics hardware performance and improvements in programmability have enabled GPUs to evolve from graphics-specific accelerators into general-purpose computing devices. Titan, the world's second-fastest supercomputer for open science in 2014, consists of more than 18,000 GPUs that scientists from domains such as astrophysics, fusion, climate, and combustion routinely use to run large-scale simulations. Unfortunately, while the performance efficiency of GPUs is well understood, their resilience characteristics in a large-scale computing system have not been fully evaluated. We present a detailed study to provide a thorough understanding of GPU errors on a large-scale GPU-enabled system. Our data was collected from the Titan supercomputer at the Oak Ridge Leadership Computing Facility and a GPU cluster at Los Alamos National Laboratory. We also present results from our extensive neutron-beam tests, conducted at the Los Alamos Neutron Science Center (LANSCE) and at ISIS (Rutherford Appleton Laboratory, UK), to measure the resilience of different generations of GPUs. We present several findings from our field data and neutron-beam experiments, and discuss the implications of our results for future GPU architects, current and future HPC computing facilities, and researchers focusing on GPU resilience.
AB - Increases in graphics hardware performance and improvements in programmability have enabled GPUs to evolve from graphics-specific accelerators into general-purpose computing devices. Titan, the world's second-fastest supercomputer for open science in 2014, consists of more than 18,000 GPUs that scientists from domains such as astrophysics, fusion, climate, and combustion routinely use to run large-scale simulations. Unfortunately, while the performance efficiency of GPUs is well understood, their resilience characteristics in a large-scale computing system have not been fully evaluated. We present a detailed study to provide a thorough understanding of GPU errors on a large-scale GPU-enabled system. Our data was collected from the Titan supercomputer at the Oak Ridge Leadership Computing Facility and a GPU cluster at Los Alamos National Laboratory. We also present results from our extensive neutron-beam tests, conducted at the Los Alamos Neutron Science Center (LANSCE) and at ISIS (Rutherford Appleton Laboratory, UK), to measure the resilience of different generations of GPUs. We present several findings from our field data and neutron-beam experiments, and discuss the implications of our results for future GPU architects, current and future HPC computing facilities, and researchers focusing on GPU resilience.
UR - http://www.scopus.com/inward/record.url?scp=84934300860&partnerID=8YFLogxK
U2 - 10.1109/HPCA.2015.7056044
DO - 10.1109/HPCA.2015.7056044
M3 - Conference contribution
AN - SCOPUS:84934300860
T3 - 2015 IEEE 21st International Symposium on High Performance Computer Architecture, HPCA 2015
SP - 331
EP - 342
BT - 2015 IEEE 21st International Symposium on High Performance Computer Architecture, HPCA 2015
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2015 21st IEEE International Symposium on High Performance Computer Architecture, HPCA 2015
Y2 - 7 February 2015 through 11 February 2015
ER -