Performance implications of architectural and software techniques on I/O-intensive applications

Meenakshi A. Kandaswamy, Mahmut Kandemir, Alok Choudhary, David E. Bernholdt

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

6 Scopus citations

Abstract

Many large-scale applications have significant I/O requirements in addition to their computational and memory requirements. Unfortunately, the limited number of I/O nodes provided by contemporary message-passing distributed-memory architectures such as the Intel Paragon and IBM SP-2 severely limits the I/O performance of these applications. In this paper, we examine several software optimization techniques and architectural scalability and evaluate their effect on five I/O-intensive applications drawn from both small and large application domains. Our goals in this study are twofold: first, we want to understand the behavior of large-scale data-intensive applications and the impact of the I/O subsystem on their performance, and vice versa; second, and more importantly, we strive to determine solutions that improve application performance through a mix of architectural and software techniques. Our results reveal that different applications benefit from different optimizations: for example, we found that some applications benefit from file layout optimizations whereas others benefit from collective I/O. A combination of architectural and software solutions is normally needed to obtain good I/O performance. For example, we show that with a limited number of I/O resources, it is possible to obtain good performance by using appropriate software optimizations. We also show that beyond a certain level, imbalance in the architecture degrades performance even with optimized software, indicating the need for additional I/O resources.

Original language: English
Title of host publication: Proceedings - 1998 International Conference on Parallel Processing, ICPP 1998
Editors: Ten H. Lai
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 493-500
Number of pages: 8
ISBN (Electronic): 0818686502
State: Published - 1998
Externally published: Yes
Event: 1998 International Conference on Parallel Processing, ICPP 1998 - Minneapolis, United States
Duration: Aug 10 1998 – Aug 14 1998

Publication series

Name: Proceedings of the International Conference on Parallel Processing
ISSN (Print): 0190-3918

Conference

Conference: 1998 International Conference on Parallel Processing, ICPP 1998
Country/Territory: United States
City: Minneapolis
Period: 08/10/98 – 08/14/98

Funding

A modified version of SCF 1.1 and SCF 3.0, as developed and distributed by Pacific Northwest National Laboratory, P.O. Box 999, Richland, Washington 99352, USA, and funded by the U.S. Department of Energy, was used to obtain some of these results. We also thank Evgenia Smirni for her help in using the Pablo instrumentation library. We would like to thank Rajeev Thakur for his assistance in running BTIO. We are grateful to Caltech and Argonne National Laboratory for allowing us to use their parallel machines for conducting our experiments. This work was supported in part by NSF Young Investigator Award CCR-9357840, NSF CCR-9509143, NSF ASC-9707074, Sandia National Labs Contract AV-6193, and in part by the Scalable I/O Initiative, contract number DABT63-94-C-0049 from the Defense Advanced Research Projects Agency (DARPA), administered by the US Army at Fort Huachuca. Dr. D. E. Bernholdt was supported by the Alex G. Nason Fellowship at Syracuse University.

Funders: Funder number
Defense Advanced Research Projects Agency: DABT63-94-C-0049
U.S. Department of Energy
National Science Foundation: CCR-9357840, CCR-9509143, ASC-9707074
Sandia National Laboratories: AV-6193
Argonne National Laboratory
Syracuse University
Pacific Northwest National Laboratory
