Evaluating high-performance computers

Jeffrey S. Vetter, Bronis R. De Supinski, Lynn Kissel, John May, Sheila Vaidya

Research output: Contribution to journal › Article › peer-review


Abstract

Comparisons of high-performance computers based on their peak floating point performance are common but seldom useful when comparing performance on real workloads. Factors that influence sustained performance extend beyond a system's floating-point units, and real applications exercise machines in complex and diverse ways. Even when it is possible to compare systems based on their performance, other considerations affect which machine is best for a given organization. These include the cost, the facilities requirements (power, floorspace, etc.), the programming model, the existing code base, and so on. This paper describes some of the important measures for evaluating high-performance computers. We present data for many of these metrics based on our experience at Lawrence Livermore National Laboratory (LLNL), and we compare them with published information on the Earth Simulator. We argue that evaluating systems involves far more than comparing benchmarks and acquisition costs. We show that evaluating systems often involves complex choices among a variety of factors that influence the value of a supercomputer to an organization, and that the high-end computing community should view cost/performance comparisons of different architectures with skepticism. Published in 2005 by John Wiley & Sons, Ltd.

Original language: English
Pages (from-to): 1239-1270
Number of pages: 32
Journal: Concurrency and Computation: Practice and Experience
Volume: 17
Issue number: 10
DOIs
State: Published - Aug 25 2005

Keywords

  • Computer architecture
  • High-performance computing
  • Parallel and distributed computing
  • Performance evaluation
