Abstract
Monitoring of High Performance Computing (HPC) platforms is critical to successful operations, can provide insights into performance-impacting conditions, and can inform methodologies for improving science throughput. However, monitoring systems are generally not considered core capabilities, either in system requirements specifications or in vendor development strategies. In this paper we present work performed at a number of large-scale HPC sites toward developing monitoring capabilities that fill current gaps in the ease of problem identification and root-cause discovery. We also present our collective views, based on these experiences, on the needs and requirements that would enable vendors or users to develop effective, sharable, end-to-end monitoring capabilities.
Original language | English |
---|---|
Title of host publication | Proceedings - 2018 IEEE International Conference on Cluster Computing, CLUSTER 2018 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 532-542 |
Number of pages | 11 |
ISBN (Electronic) | 9781538683194 |
State | Published - Oct 29 2018 |
Event | 2018 IEEE International Conference on Cluster Computing, CLUSTER 2018 - Belfast, United Kingdom
Duration | Sep 10 2018 → Sep 13 2018
Publication series
Name | Proceedings - IEEE International Conference on Cluster Computing, ICCC |
---|---|
Volume | 2018-September |
ISSN (Print) | 1552-5244 |
Conference
Conference | 2018 IEEE International Conference on Cluster Computing, CLUSTER 2018 |
---|---|
Country/Territory | United Kingdom |
City | Belfast |
Period | Sep 10 2018 → Sep 13 2018
Funding
- This research was supported by and used resources of the Argonne Leadership Computing Facility, which is a U.S. Department of Energy Office of Science User Facility operated under contract DE-AC02-06CH11357.
- This document is approved for release under LA-UR-18-26485.
- This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
- This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under Award Number 2015-02674.
- Contributions to this work were supported by the Swiss National Supercomputing Centre (CSCS).
- This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications.
- This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility under Contract No. DE-AC05-00OR22725.
- Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
- The views expressed in the article do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
Funders | Funder number |
---|---|
Swiss National Supercomputing Centre | |
National Science Foundation | OCI-0725070, ACI-1238993 |
U.S. Department of Energy | DE-AC02-05CH11231, DE-AC02-06CH11357 |
Office of Science | DE-AC05-00OR22725 |
National Nuclear Security Administration | DE-NA0003525 |
Advanced Scientific Computing Research | 2015-02674 |
Keywords
- HPC monitoring
- Monitoring architecture
- System administration