Abstract
This article studies the I/O write behaviors of the Titan supercomputer and its Lustre parallel file stores under production load. The results can inform the design, deployment, and configuration of file systems, as well as the design of I/O software in applications, operating systems, and adaptive I/O libraries. We propose a statistical benchmarking methodology to measure write performance across I/O configurations, hardware settings, and system conditions, and we introduce two relative measures to quantify the write-performance behaviors of hardware components under production load. In addition to designing experiments and benchmarking on Titan, we verify the experimental results on one real application and one real application I/O kernel, XGC and HACC IO, respectively; both are representative and widely used to capture the typical I/O behaviors of applications. In summary, we find that Titan's I/O system is variable across the machine at fine time scales. This variability has two major implications. First, stragglers lessen the benefit of coupled I/O parallelism (striping). Peak median output bandwidths are obtained with parallel writes to many independent files, with no striping or write sharing of files across clients (compute nodes). I/O parallelism is most effective when the application, or its I/O libraries, distributes the I/O load so that each target stores files for multiple clients and each client writes files on multiple targets, in a balanced way and with minimal contention. Second, our results suggest that the potential benefit of dynamic adaptation is limited. In particular, it is not fruitful to attempt to identify “good locations” in the machine or in the file system: component performance is driven by transient load conditions, and past performance is not a useful predictor of future performance. For example, we observe no predictable diurnal load patterns.
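The headline configuration result above, peak median bandwidth from parallel writes to many independent, unstriped files, corresponds to the common file-per-process output pattern. The following is a minimal MPI-IO sketch of that pattern, not the paper's benchmark code; the output path, file naming, and per-rank write size are illustrative assumptions, and on a Lustre system the single-stripe layout would typically be set on the output directory beforehand with `lfs setstripe -c 1`.

```c
/* Minimal file-per-process write sketch (illustrative, not the paper's
 * benchmark code). Each rank writes its own file, so there is no striping
 * or write sharing of files across clients -- the layout the abstract
 * reports as giving peak median output bandwidth on Titan.
 *
 * On Lustre, disable striping for the output directory first, e.g.:
 *   lfs setstripe -c 1 /path/to/output
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Hypothetical output path and write size; adjust for a real run. */
    const size_t nbytes = 64 * 1024 * 1024;   /* 64 MiB per rank */
    char path[256];
    snprintf(path, sizeof(path), "/path/to/output/rank-%06d.dat", rank);

    char *buf = malloc(nbytes);
    memset(buf, rank & 0xff, nbytes);

    /* One file per rank, opened on MPI_COMM_SELF: fully independent
     * files, so no shared-file lock contention across compute nodes. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_SELF, path,
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write(fh, buf, (int)nbytes, MPI_BYTE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(buf);
    MPI_Finalize();
    return 0;
}
```

With one stripe per file and a balanced mapping of files to storage targets, each target serves files from multiple clients and each client spreads its files over multiple targets, which is the balanced, low-contention distribution the abstract describes.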
| Original language | English |
|---|---|
| Article number | 26 |
| Journal | ACM Transactions on Storage |
| Volume | 15 |
| Issue number | 4 |
| DOIs | https://doi.org/10.1145/3335205 |
| State | Published - Jan 16 2020 |
Funding
Bing Xie conducted much of this research as a graduate student at Duke University, with support from Duke University and also from the U.S. National Science Foundation under Grant No. CNS-1245997. The work used resources of the Oak Ridge Leadership Computing Facility, located in the National Center for Computational Sciences at the Oak Ridge National Laboratory, which is supported by the Office of Science of the Department of Energy under Contract No. DE-AC05-00OR22725. The work also used resources of Sandia National Laboratories, a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under Contract No. DE-NA0003525.
| Funders | Funder number |
|---|---|
| Office of Science of the Department of Energy | DE-AC05-00OR22725 |
| National Science Foundation | CNS-1245997 |
| U.S. Department of Energy | |
| National Nuclear Security Administration | DE-NA0003525 |
| Sandia National Laboratories | |
| Duke University | |