Using MPI file caching to improve parallel write performance for large-scale scientific applications

Wei Keng Liao, Avery Ching, Kenin Coloma, Arifa Nisar, Alok Choudhary, Jacqueline Chen, Ramanan Sankaran, Scott Klasky

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

21 Scopus citations

Abstract

Typical large-scale scientific applications periodically write checkpoint files to save the computational state throughout execution. Existing parallel file systems improve such write-only I/O patterns through the use of client-side file caching and write-behind strategies. In distributed environments where files are rarely accessed by more than one client concurrently, file caching has achieved significant success; however, in parallel applications where multiple clients manipulate a shared file, cache coherence control can serialize I/O. We have designed a thread-based caching layer for the MPI I/O library, which adds a portable caching system closer to the user application, where more information about the application's I/O patterns is available for better coherence control. We demonstrate the impact of our caching solution on parallel write performance with a comprehensive evaluation that includes a set of widely used I/O benchmarks and production application I/O kernels. © 2007 ACM.
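For context, the sketch below (not from the paper) illustrates the shared-file checkpoint write pattern that client-side caching in the MPI-IO layer targets: every process writes its local state into a disjoint region of one shared file with a collective write. The file name, data sizes, and layout are illustrative assumptions.

```c
/*
 * Minimal sketch (not the authors' code): a shared-file checkpoint
 * written collectively through MPI-IO. Each process writes its own
 * contiguous block of a single shared file; names and sizes are
 * illustrative only.
 */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Local portion of the computational state to checkpoint. */
    const MPI_Offset count = 1 << 20;               /* 1M doubles per process */
    double *state = malloc(count * sizeof(double));
    for (MPI_Offset i = 0; i < count; i++)
        state[i] = (double)rank;

    /* All processes open the same shared checkpoint file. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "checkpoint.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes to a disjoint, contiguous region of the file. */
    MPI_Offset offset = (MPI_Offset)rank * count * (MPI_Offset)sizeof(double);
    MPI_File_write_at_all(fh, offset, state, (int)count, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(state);
    MPI_Finalize();
    return 0;
}
```

Because many clients write to the same file, a file-system-level cache must enforce coherence across all of them; moving the cache into the MPI-IO layer, as the paper proposes, lets coherence control exploit knowledge of the application's access pattern.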

Original language: English
Title of host publication: Proceedings of the 2007 ACM/IEEE Conference on Supercomputing, SC'07
DOIs
State: Published - 2007
Event: 2007 ACM/IEEE Conference on Supercomputing, SC'07 - Reno, NV, United States
Duration: Nov 10, 2007 - Nov 16, 2007

Publication series

Name: Proceedings of the 2007 ACM/IEEE Conference on Supercomputing, SC'07

Conference

Conference: 2007 ACM/IEEE Conference on Supercomputing, SC'07
Country/Territory: United States
City: Reno, NV
Period: 11/10/07 - 11/16/07
