Abstract
Execution of heterogeneous workflows on high-performance computing (HPC) platforms presents unprecedented resource management and execution coordination challenges for runtime systems. Task heterogeneity increases the complexity of resource and execution management, limiting the scalability and efficiency of workflow execution. Resource partitioning and the distribution of task execution over partitioned resources promise to address these problems, but we lack an experimental evaluation of their performance at scale. This paper provides a performance evaluation of the Process Management Interface for Exascale (PMIx) and its reference implementation PRRTE on the leadership-class HPC platform Summit, when integrated into the pilot-based runtime system RADICAL-Pilot. We partition resources across multiple PRRTE Distributed Virtual Machine (DVM) environments, each responsible for launching tasks via the PMIx interface. We experimentally measure workload execution performance in terms of task scheduling/launching rates, the distribution of DVM task placement times, and DVM startup and termination overheads on Summit. The integrated PMIx/PRRTE solution offers an abstracted, standardized set of interfaces for orchestrating the launch process, along with dynamic process management and monitoring capabilities. It extends scaling capabilities, overcoming a limitation of other launching mechanisms (e.g., JSM/LSF). Exploring different DVM setup configurations provides insights into DVM performance and a layout for leveraging it. Our experimental results show that a heterogeneous workload of 65,500 tasks on 2048 nodes, partitioned across 32 DVMs, runs steadily with resource utilization no lower than 52%. With fewer concurrently executing tasks, resource utilization reaches up to 85%, based on a heterogeneous workload of 8200 tasks on 256 nodes and 2 DVMs.
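The approach the abstract describes can be illustrated with a short RADICAL-Pilot sketch. The snippet below is a minimal, hypothetical example of submitting a heterogeneous task bag through a pilot, assuming a RADICAL-Pilot 1.x-style Python API; the resource label, core counts, and the mapping of tasks onto PRRTE DVMs inside the pilot agent are illustrative assumptions, not details taken from the paper.

```python
# Minimal RADICAL-Pilot sketch (assumptions noted inline): acquire a pilot
# on Summit and submit a heterogeneous bag of tasks through it. The agent's
# resource configuration, not the task descriptions, determines how the
# allocation is partitioned across PRRTE DVMs (assumed here).
import radical.pilot as rp

session = rp.Session()
try:
    pmgr = rp.PilotManager(session=session)
    tmgr = rp.TaskManager(session=session)

    # One pilot spanning the allocation; resource label and sizing are
    # illustrative assumptions.
    pd = rp.PilotDescription({'resource': 'ornl.summit',  # assumed label
                              'cores'   : 256 * 42,       # 256 Summit nodes
                              'runtime' : 60})            # minutes
    pilot = pmgr.submit_pilots(pd)
    tmgr.add_pilots(pilot)

    # A heterogeneous task bag: tasks differ in rank count.
    tds = []
    for n_ranks in (1, 4, 16):
        td = rp.TaskDescription()
        td.executable = '/bin/sleep'   # placeholder workload
        td.arguments  = ['10']
        td.ranks      = n_ranks        # 'cpu_processes' in older releases
        tds.append(td)

    tmgr.submit_tasks(tds)
    tmgr.wait_tasks()
finally:
    session.close()
```

In this model the tasks remain agnostic of the launch machinery: the runtime places each task onto one of the DVMs it manages, which is consistent with the partitioning scheme the abstract evaluates.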
Original language | English |
---|---|
Title of host publication | Job Scheduling Strategies for Parallel Processing - 25th International Workshop, JSSPP 2022, Revised Selected Papers |
Editors | Dalibor Klusáček, Julita Corbalán, Gonzalo P. Rodrigo
Publisher | Springer Science and Business Media Deutschland GmbH |
Pages | 88-107 |
Number of pages | 20 |
ISBN (Print) | 9783031226977 |
DOIs | |
State | Published - 2023 |
Event | 25th International Workshop on Job Scheduling Strategies for Parallel Processing, JSSPP 2022, held in conjunction with the 36th IEEE International Parallel and Distributed Processing Symposium, IPDPS 2022 - Virtual, Online
Duration | Jun 3 2022 → Jun 3 2022
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 13592 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 25th International Workshop on Job Scheduling Strategies for Parallel Processing, JSSPP 2022, held in conjunction with the 36th IEEE International Parallel and Distributed Processing Symposium, IPDPS 2022 |
---|---|
City | Virtual, Online |
Period | 06/3/22 → 06/3/22 |
Funding
Acknowledgments. We would like to thank other members of the PMIx community, and Ralph Castain in particular, for the excellent work that we build upon. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. This work is also supported by the ExaWorks project (part of the Exascale Computing Project (ECP)) under DOE Contract No. DE-SC0012704 and by the DOE HEP Center for Computational Excellence at Brookhaven National Laboratory under B&R KA2401045. We also acknowledge DOE INCITE awards for allocations on Summit.
Keywords
- High performance computing
- Middleware
- Resource management
- Runtime environment
- Runtime system