Abstract
MPI-IO often performs poorly in a Lustre file system environment, although the reasons for such performance have heretofore not been well understood. We hypothesize that this poor performance is a direct result of the fundamental assumptions upon which most parallel I/O optimizations are based. In particular, it is almost universally believed that parallel I/O performance is optimized when aggregator processes perform large, contiguous I/O operations in parallel. Our research, however, shows that this approach can actually provide the worst performance in a Lustre environment, and that the best performance may be obtained by performing a large number of small, non-contiguous I/O operations. In this paper, we provide empirical results demonstrating this non-intuitive behavior and explore the reasons for it. We present our solution to the problem, a user-level library termed Y-Lib, which redistributes data in a way that conforms much more closely to the Lustre storage architecture than does the data redistribution pattern employed by MPI-IO. We provide a large body of experimental results, taken across two large-scale Lustre installations, demonstrating that Y-Lib outperforms MPI-IO by up to 36% on one system and 1000% on the other. We discuss the factors that impact the performance improvement obtained by Y-Lib, including the number of aggregator processes and Object Storage Devices, as well as the power of the system's communications infrastructure. We also show that the optimal data redistribution pattern for Y-Lib depends upon these same factors.
Original language | English |
---|---|
Pages (from-to) | 1433-1449 |
Number of pages | 17 |
Journal | Concurrency and Computation: Practice and Experience |
Volume | 22 |
Issue number | 11 |
DOIs | |
State | Published - Aug 10 2010 |
Externally published | Yes |
Keywords
- Grid computing
- Object-based file systems
- Parallel I/O