Toward exascale resilience

Franck Cappello, Al Geist, Bill Gropp, Laxmikant Kale, Bill Kramer, Marc Snir

Research output: Contribution to journal › Article › peer-review

251 Scopus citations

Abstract

Over the past few years, resilience has become a major issue for high-performance computing (HPC) systems, particularly in view of large petascale systems and future exascale systems. These systems will typically gather from half a million to several million central processing unit (CPU) cores running up to a billion threads. From current knowledge and observations of existing large systems, it is anticipated that exascale systems will experience various kinds of faults many times per day. It is also anticipated that the current approach to resilience, which relies on automatic or application-level checkpoint/restart, will not work because the time to checkpoint and restart will exceed the mean time to failure of a full system. This set of projections leaves the HPC fault-tolerance community with a difficult challenge: finding new, possibly radically disruptive, approaches to run applications until their normal termination despite the essentially unstable nature of exascale systems. Yet the community has only five to six years to solve the problem. This white paper synthesizes the motivations, observations, and research issues considered decisive by several complementary experts in HPC applications, programming models, distributed systems, and system management.
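The abstract's infeasibility claim can be illustrated with the standard first-order checkpoint model (Young/Daly); this sketch is an illustration of the argument, not a formula taken from the paper. Assume a checkpoint takes time $C$ to write and the full system has mean time to failure $M$. The fraction of time lost to checkpointing plus expected rework after a failure, for checkpoint interval $\tau$, is approximately

\[
\text{overhead}(\tau) \approx \frac{C}{\tau} + \frac{\tau}{2M},
\qquad
\tau_{\text{opt}} \approx \sqrt{2CM},
\qquad
\text{overhead}(\tau_{\text{opt}}) \approx \sqrt{\frac{2C}{M}}.
\]

Once $C$ grows to $M/2$, the estimated overhead reaches 100% and the machine makes no forward progress; this is the regime the abstract anticipates when checkpoint/restart time approaches or exceeds the system MTBF at exascale.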

Original language: English
Pages (from-to): 374-388
Number of pages: 15
Journal: International Journal of High Performance Computing Applications
Volume: 23
Issue number: 4
DOIs
State: Published - 2009

Keywords

  • Challenge
  • Exascale
  • Fault tolerance
  • High-performance computing
  • Resilience
