Abstract
Processing large quantities of data is a common scenario for parallel applications. While distributed-memory applications can improve the performance of their I/O operations through parallel I/O libraries, no comparable parallel I/O support is currently available for applications written in shared-memory programming models such as OpenMP. This paper presents parallel I/O interfaces for OpenMP. We discuss the rationale behind our design decisions, present the interface specification and an implementation within the OpenUH compiler, and discuss a number of optimizations performed. We demonstrate the benefits of this approach on different file systems for multiple benchmarks and application scenarios. In most cases, we observe significant improvements in I/O performance compared to the sequential version. Furthermore, we compare the OpenMP I/O functions introduced in this paper to Message Passing Interface (MPI) I/O and demonstrate the benefits of the new interfaces.
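To make the baseline the abstract refers to concrete, the sketch below shows the pattern commonly used today: an OpenMP team produces data in parallel, but a single thread performs the file write, so I/O does not scale with the number of threads. The code compiles with any OpenMP-capable C compiler; the collective call names in the trailing comment (omp_file_open_all, omp_file_write_all, omp_file_close_all) are assumptions about the style of interface the paper proposes, not its verbatim API.

```c
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1L << 20)

int main(void) {
    double *buf = malloc(N * sizeof(double));
    if (!buf) return 1;

    /* Each thread fills its portion of the shared buffer in parallel. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        buf[i] = (double)i;

    /* Baseline ("sequential") I/O: one thread writes the entire buffer,
       so the write phase gains nothing from the additional threads. */
    FILE *f = fopen("out.dat", "wb");
    if (f) {
        fwrite(buf, sizeof(double), N, f);
        fclose(f);
    }

    /* With collective OpenMP I/O interfaces of the kind proposed in the
       paper, every thread of the team could take part in the write,
       e.g. (hypothetical names, shown for illustration only):
       #pragma omp parallel
       {
           int fd = omp_file_open_all("out.dat", ...);
           omp_file_write_all(fd, buf, ...);
           omp_file_close_all(fd);
       }
    */
    free(buf);
    return 0;
}
```

The baseline write is also the "sequential version" against which the abstract reports its I/O performance improvements.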
Original language | English |
---|---|
Pages (from-to) | 286-309 |
Number of pages | 24 |
Journal | International Journal of Parallel Programming |
Volume | 43 |
Issue number | 2 |
DOIs | |
State | Published - Apr 2015 |
Externally published | Yes |
Funding
We would like to thank the Technical University of Dresden for giving us access to the Atlas cluster and their parallel file system. Partial support for this work was provided by the National Science Foundation’s Computer Systems Research program under Award No. CRI-0958464. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Funders | Funder number |
---|---|
National Science Foundation | CRI-0958464 |
Keywords
- OpenMP
- Parallel I/O
- Shared memory system