Abstract
Current trends in high performance computing suggest that users will soon have widespread access to clusters of multiprocessors with hundreds, if not thousands, of processors. This unprecedented degree of parallelism will undoubtedly expose scalability limitations in existing applications, where scalability is the ability of a parallel algorithm on a parallel architecture to effectively utilize an increasing number of processors. Users will need precise and automated techniques for detecting the cause of limited scalability. This paper addresses this problem. First, we argue that users face numerous challenges in understanding application scalability: managing substantial amounts of experiment data, extracting useful trends from this data, and reconciling performance information with their application's design. Second, we propose a solution that automates this data analysis by applying fundamental statistical techniques to scalability experiment data. Finally, we evaluate our operational prototype on several applications and show that statistical techniques offer an effective strategy for assessing application scalability. In particular, we find that non-parametric correlation of the number of tasks to the ratio of the time for communication operations to overall communication time provides a reliable measure for identifying communication operations that scale poorly.
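The abstract names the measure only at a high level, so the sketch below is one plausible reading rather than the paper's implementation: it assumes Spearman rank correlation as the non-parametric statistic (the abstract does not specify the coefficient) and uses hypothetical experiment data in which each communication operation's time is expressed as a fraction of total communication time across runs with increasing task counts. The operation names, values, and the 0.9 flagging threshold are illustrative assumptions.

```python
# Hedged sketch of the correlation-based screening described in the abstract.
# Assumption: "non-parametric correlation" is realized here as Spearman's rho.
from scipy.stats import spearmanr

# Hypothetical scalability-experiment data: task counts for each run, and for
# each communication operation its share of total communication time per run.
task_counts = [16, 32, 64, 128, 256]
comm_ratio = {
    "MPI_Allreduce": [0.10, 0.14, 0.21, 0.33, 0.47],  # share grows with tasks
    "MPI_Isend":     [0.40, 0.38, 0.41, 0.37, 0.39],  # share roughly constant
}

# An operation whose share of communication time rises monotonically with the
# task count is flagged as scaling poorly; 0.9 is an illustrative cutoff only.
for op, ratios in comm_ratio.items():
    rho, pvalue = spearmanr(task_counts, ratios)
    verdict = "scales poorly" if rho > 0.9 else "ok"
    print(f"{op}: rho={rho:.2f}, p={pvalue:.3f} -> {verdict}")
```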
Original language | English |
---|---|
Pages | 123-132 |
Number of pages | 10 |
State | Published - 2001 |
Externally published | Yes |
Event | 8th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming - Snowbird, UT, United States (Jun 18 2001 → Jun 20 2001) |
Conference
Conference | 8th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming |
---|---|
Country/Territory | United States |
City | Snowbird, UT |
Period | 06/18/01 → 06/20/01 |