Scalable PGAS metadata management on extreme scale systems

Daniel Chavarría-Miranda, Khushbu Agarwal, T. P. Straatsma

Research output: Contribution to conference › Paper › peer-review

Abstract

Programming models intended to run on exascale systems face a number of challenges, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to the processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large trans-petascale systems, a key component of their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) compared to application data. We present an evaluation of several strategies to manage PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
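
To make the metadata question concrete, the sketch below (not taken from the paper; the names dist1d_t, owner_of and local_offset are illustrative) shows how a PGAS runtime can resolve the owner and local offset of any global array index under a regular block distribution using only a constant amount of per-process metadata, rather than a lookup table that grows with the number of processes or array elements. This is the kind of strictly sub-linear bookkeeping the abstract argues for.

/*
 * Illustrative sketch, not the paper's implementation: with a regular
 * block distribution, ownership of a global index can be computed from
 * O(1) metadata per process (global length, block size, process count)
 * instead of being stored in a table whose size grows with the system.
 */
#include <stdio.h>

typedef struct {
    long n;       /* global number of array elements */
    long block;   /* elements per process (block size) */
    int  nprocs;  /* number of processes */
} dist1d_t;

/* Owning process of global index i under a block distribution. */
static int owner_of(const dist1d_t *d, long i)
{
    int p = (int)(i / d->block);
    return p < d->nprocs ? p : d->nprocs - 1;  /* last block may be ragged */
}

/* Offset of global index i within its owner's local block. */
static long local_offset(const dist1d_t *d, long i)
{
    return i - (long)owner_of(d, i) * d->block;
}

int main(void)
{
    dist1d_t d = { .n = 1000000, .block = 62500, .nprocs = 16 };
    long i = 123456;
    printf("global index %ld -> process %d, local offset %ld\n",
           i, owner_of(&d, i), local_offset(&d, i));
    return 0;
}

Irregular or block-cyclic distributions require richer metadata than this closed-form calculation, which is presumably where space/time tradeoffs of the kind evaluated in the paper come into play.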

Original language: English
Pages: 103-111
Number of pages: 9
State: Published - 2013
Externally published: Yes
Event: 13th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing, CCGrid 2013 - Delft, Netherlands
Duration: May 13, 2013 – May 16, 2013

Conference

Conference: 13th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing, CCGrid 2013
Country/Territory: Netherlands
City: Delft
Period: 05/13/13 – 05/16/13
