Abstract
Runtime scheduling and workflow systems are an increasingly popular algorithmic component in HPC because they allow full system utilization with relaxed synchronization requirements. With so many special-purpose tools for task scheduling already available, one might wonder why more are needed. Use cases on the Summit supercomputer required better integration with MPI and greater flexibility in job launch configurations. Preparation, execution, and analysis of computational chemistry simulations at the scale of tens of thousands of processors revealed three distinct workflow patterns. A separate job scheduler was implemented for each pattern using an extremely simple and robust design: file-based, task-list based, and bulk-synchronous. A comparison with existing methods shows the unique benefits of this work, including simplicity of design, suitability for HPC centers, short startup time, and well-understood per-task overhead. All three new tools have been shown to scale to full utilization of Summit and have been made publicly available with tests and documentation. This work also presents a complete characterization of the minimum effective task granularity for efficient scheduler usage. These schedulers share the same bottlenecks as, and hence similar task granularities to, those reported for existing tools following comparable paradigms.
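The abstract does not include implementation details, so the following is only an illustrative sketch of what a "file-based" scheduling pattern like the one named above can look like; it is not the authors' code, and all names (TASKS_DIR, claim_task, worker_loop) are hypothetical. The sketch assumes workers on a shared filesystem claim task files via atomic renames, one simple way to coordinate without a central server.

```python
# Illustrative sketch only -- not the schedulers described in the paper.
# Assumes a shared directory of task files (one shell script per task);
# workers claim tasks by atomically renaming them, so no server is needed.
import os
import subprocess
import time

TASKS_DIR = "tasks"          # hypothetical task directory
CLAIMED_SUFFIX = ".claimed"  # marker appended to claimed task files

def claim_task():
    """Try to claim one pending task file via an atomic rename."""
    for name in sorted(os.listdir(TASKS_DIR)):
        if name.endswith(CLAIMED_SUFFIX):
            continue
        src = os.path.join(TASKS_DIR, name)
        try:
            os.rename(src, src + CLAIMED_SUFFIX)  # atomic on POSIX filesystems
            return src + CLAIMED_SUFFIX
        except OSError:
            continue  # another worker claimed this task first
    return None

def worker_loop(poll_interval=5.0):
    """Claim and run tasks until no unclaimed task files remain."""
    while True:
        task = claim_task()
        if task is not None:
            subprocess.run(["bash", task], check=False)
            continue
        pending = [n for n in os.listdir(TASKS_DIR)
                   if not n.endswith(CLAIMED_SUFFIX)]
        if not pending:
            break  # nothing left to claim
        time.sleep(poll_interval)

if __name__ == "__main__":
    worker_loop()
```

In such a pattern, the per-task overhead is dominated by filesystem metadata operations and the polling interval, which is consistent with the abstract's emphasis on well-understood per-task overhead and minimum effective task granularity.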
Original language | English |
---|---|
Pages (from-to) | 99-114 |
Number of pages | 16 |
Journal | Software: Practice and Experience |
Volume | 53 |
Issue number | 1 |
DOIs | |
State | Published - Jan 2023 |
Funding
This research was sponsored in part by the Laboratory Directed Research and Development Program at Oak Ridge National Laboratory (ORNL), which is managed by UT‐Battelle, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE‐AC05‐00OR22725. This work also used resources, services, and support provided via the COVID‐19 HPC Consortium (https://covid19‐hpc‐consortium.org/), which is a unique private‐public effort to bring together government, industry, and academic leaders who are volunteering free compute time and resources in support of COVID‐19 research, and used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE‐AC05‐00OR22725.
Keywords
- distributed asynchronous
- runtime scheduling
- task graph
- workflow management systems