Abstract
This paper examines the explicit communication characteristics of several sophisticated scientific applications, which together constitute a representative suite of publicly available benchmarks for large cluster architectures. By focusing on the Message Passing Interface (MPI) and by using hardware counters on the microprocessor, we observe each application's inherent behavioral characteristics: point-to-point and collective communication, and floating-point operations. Furthermore, we explore the sensitivity of these characteristics to both problem size and number of processors. Our analysis reveals several striking similarities across our diverse set of applications, including the use of collective operations, especially collectives with very small data payloads. We also highlight a trend of novel applications departing from regimented, static communication patterns in favor of dynamically evolving patterns, as evidenced by our experiments on applications that use implicit linear solvers and adaptive mesh refinement. Overall, our study contributes a better understanding of the requirements of current and emerging paradigms of scientific computing in terms of their computation and communication demands.
| Field | Value |
| --- | --- |
| Original language | English |
| Pages (from-to) | 853-865 |
| Number of pages | 13 |
| Journal | Journal of Parallel and Distributed Computing |
| Volume | 63 |
| Issue number | 9 |
| State | Published - Sep 2003 |
| Externally published | Yes |
Funding
We thank Mark Seager, Bob Lucas, and the anonymous reviewers for their useful comments, which helped improve the quality of the paper. We also thank Andy Wissink for creating the shock tube problem for SAMRAI. This work was performed under the auspices of the U.S. Dept. of Energy by University of California LLNL under contract W-7405-Eng-48. LLNL Document Number UCRL-JC-143483.
| Funders | Funder number |
| --- | --- |
| U.S. Dept. of Energy | |
| University of California LLNL | W-7405-Eng-48 |