Abstract
The benchmarking effort within the Computational Research & Development Programs at the Oak Ridge National Laboratory (ORNL) seeks to design and enable High Performance Computing (HPC) benchmarks and test suites. The work described in this report is part of that effort, focusing on the comparison and analysis of OpenSHMEM implementations using the Interleaved Or Random (IOR) software for benchmarking parallel file systems through the POSIX, MPIIO, or HDF5 interfaces. We describe the effort to emulate the MPIIO parallel collective capabilities in the IOR benchmark using OpenSHMEM communication. One development effort involved emulating the MPI derived datatypes used in the read/write operations and in setting the file view. Another involved implementing an internal cache in OpenSHMEM distributed shared memory to facilitate global collective I/O operations. Experiments comparing collective I/O in the MPIIO implementations with the OpenSHMEM implementations were performed on the SGI Turing Cluster and the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The preliminary results suggest that on the Cray XK7 Titan the MPIIO implementations obtained higher write performance and the OpenSHMEM version obtained slightly higher read performance. On the SGI Turing Cluster, the MPIIO implementations obtained slightly higher performance than the OpenSHMEM implementations on large files.
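To make the cache-based approach mentioned above concrete, the following is a minimal sketch of two-phase collective writing with an OpenSHMEM symmetric-memory cache: each PE stages its strided blocks into the cache slice owned by the PE responsible for the corresponding contiguous file region, then every PE flushes its own slice with a single POSIX `pwrite()`. The block sizes, file name, and aggregation scheme here are assumptions chosen for illustration and are not taken from the report's actual IOR modifications.

```c
/* Illustrative sketch only: two-phase collective write using an OpenSHMEM
 * symmetric-memory cache.  Not the report's actual implementation. */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <shmem.h>

#define BLOCK_SIZE (1 << 20)   /* bytes contributed per PE per round (assumed) */
#define NUM_ROUNDS 4           /* strided blocks written by each PE (assumed)  */

int main(void)
{
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Symmetric cache: each PE owns one contiguous slice of the shared file,
       large enough to hold one block from every round. */
    size_t slice_bytes = (size_t)NUM_ROUNDS * BLOCK_SIZE;
    char  *cache = shmem_malloc(slice_bytes);
    char  *local = malloc(BLOCK_SIZE);

    /* Phase one: redistribute strided blocks into the owners' cache slices. */
    for (int r = 0; r < NUM_ROUNDS; r++) {
        memset(local, 'A' + (me % 26), BLOCK_SIZE);    /* this PE's data */

        long   gblock    = (long)r * npes + me;        /* global block index   */
        int    target_pe = (int)(gblock / NUM_ROUNDS); /* owner of that region */
        size_t offset    = (size_t)(gblock % NUM_ROUNDS) * BLOCK_SIZE;

        shmem_putmem(cache + offset, local, BLOCK_SIZE, target_pe);
    }
    shmem_barrier_all();   /* all puts complete before the flush */

    /* Phase two: every PE writes its now-contiguous slice to the shared file. */
    int fd = open("ior_shmem_cache.dat", O_CREAT | O_WRONLY, 0644);
    if (fd >= 0) {
        pwrite(fd, cache, slice_bytes, (off_t)me * slice_bytes);
        close(fd);
    }

    free(local);
    shmem_free(cache);
    shmem_finalize();
    return 0;
}
```

The redistribution step mimics the role of an MPIIO collective buffer: small strided accesses are aggregated in distributed shared memory so that each file-system request becomes one large contiguous write.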
| Original language | English |
|---|---|
| Place of Publication | United States |
| DOIs | |
| State | Published - 2016 |
Keywords
- 97 MATHEMATICS AND COMPUTING