Abstract
Performance measurement of parallel algorithms is well studied and well understood. However, a flaw in traditional performance metrics is that they rely on comparisons to serial performance with the same input. This comparison is convenient for theoretical complexity analysis but impossible to perform in large-scale empirical studies with data sizes far too large to run on a single serial computer. Consequently, scaling studies currently rely on ad hoc methods that, although effective, have no grounded mathematical models. In this position paper we advocate using a rate-based model that has a concrete meaning relative to speedup and efficiency and that can be used to unify strong and weak scaling studies.
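As context for the rate-based model the abstract advocates, here is a minimal sketch of the quantities involved. The symbols W (work, e.g., data size), T_N (time on N processors), R (rate), and the baseline scale N_0 are illustrative assumptions chosen for this sketch, not notation confirmed by the paper.

```latex
% Classical metrics: both require the serial time T_1 on the same input.
\[ S(N) = \frac{T_1}{T_N} \quad\text{(speedup)}, \qquad
   E(N) = \frac{S(N)}{N} \quad\text{(efficiency)} \]

% A rate-based metric needs no serial run: measure the rate
% R(N) = W / T_N, where W is the work processed (e.g., data size),
% and take efficiency relative to a measured baseline on N_0 processors.
\[ R(N) = \frac{W}{T_N}, \qquad
   E(N) = \frac{R(N)}{(N/N_0)\,R(N_0)} \]

% Consistency check: with N_0 = 1 and W fixed (strong scaling), this
% reduces to T_1 / (N T_N) = S(N)/N, the classical efficiency; letting
% W grow with N (weak scaling) uses the same formula, which is one
% sense in which a rate metric can unify the two kinds of study.
```

Under these assumptions the rate formulation keeps the familiar meaning of efficiency while replacing the infeasible serial run with any measured baseline configuration.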
Original language | English
---|---
Pages (from-to) | 488-496
Number of pages | 9
Journal | Lecture Notes in Computer Science
Volume | 9137 LNCS
DOIs |
State | Published - 2015
Externally published | Yes
Event | 30th International Conference on High Performance Computing, ISC 2015, Frankfurt, Germany, Jul 12-16, 2015
Funding
This material is based in part upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award Number 12-015215.
Funders | Funder number
---|---
U.S. Department of Energy |
Office of Science |
Advanced Scientific Computing Research | 12-015215