ISABELA for effective in situ compression of scientific data

Sriram Lakshminarasimhan, Neil Shah, Stephane Ethier, Seung Hoe Ku, C. S. Chang, Scott Klasky, Rob Latham, Rob Ross, Nagiza F. Samatova

Research output: Contribution to journal › Article › peer-review

73 Scopus citations

Abstract

Exploding dataset sizes from extreme-scale scientific simulations necessitate efficient data management and reduction schemes to mitigate I/O costs. With the discrepancy between I/O bandwidth and computational power, scientists are forced to capture data infrequently, thereby making data collection an inherently lossy process. Although data compression can be an effective solution, the random nature of real-valued scientific datasets renders lossless compression routines ineffective. These techniques also impose significant overhead during decompression, making them unsuitable for data analysis and visualization, which require repeated data access.

To address this problem, we propose an effective method for In situ Sort-And-B-spline Error-bounded Lossy Abatement (ISABELA) of scientific data that is widely regarded as effectively incompressible. With ISABELA, we apply a pre-conditioner to seemingly random and noisy data along the spatial resolution to achieve an accurate fitting model that guarantees a ≥0.99 correlation with the original data. We further take advantage of temporal patterns in scientific data to compress data by ≈85%, while introducing only a negligible runtime overhead on simulations. ISABELA significantly outperforms existing lossy compression methods, such as wavelet compression, in terms of data reduction and accuracy.

We extend our previous paper by additionally building a communication-free, scalable parallel storage framework on top of ISABELA-compressed data that is ideally suited for extreme-scale analytical processing. The basis for our storage framework is an inherently local decompression method (it need not decode the entire data), which allows for random-access decompression and low-overhead task division that can be exploited over heterogeneous architectures. Furthermore, analytical operations such as correlation and query processing run quickly and accurately over data in the compressed space.
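
The sort-then-fit idea behind ISABELA can be made concrete with a short sketch. The Python code below is a minimal, illustrative approximation only, not the authors' implementation: the helper names isabela_like_compress and isabela_like_decompress are hypothetical, SciPy's splrep/splev stand in for the B-spline fitting step, and the sketch omits ISABELA's compact index encoding, windowing choices, per-point error quantization (the error bound), and the temporal-pattern stage described in the abstract.

# Illustrative sketch of ISABELA's core idea (not the authors' code):
# sort a window of noisy data so it becomes monotone, fit a B-spline to the
# sorted values, and keep the spline coefficients plus the sort permutation.
import numpy as np
from scipy.interpolate import splrep, splev

def isabela_like_compress(window, n_knots=8):
    """Sort a window, fit a cubic B-spline to the sorted (monotone) values,
    and return the spline representation plus the permutation needed to
    restore the original ordering."""
    window = np.asarray(window, dtype=float)
    perm = np.argsort(window)                  # sort permutation (must be stored)
    sorted_vals = window[perm]
    x = np.linspace(0.0, 1.0, window.size)
    # interior knot positions; n_knots controls the compression/accuracy trade-off
    knots = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]
    tck = splrep(x, sorted_vals, t=knots, k=3)
    return tck, perm

def isabela_like_decompress(tck, perm):
    """Evaluate the spline and undo the sort to approximate the original window."""
    x = np.linspace(0.0, 1.0, perm.size)
    approx_sorted = splev(x, tck)
    out = np.empty_like(approx_sorted)
    out[perm] = approx_sorted                  # scatter back to original positions
    return out

# Example: a noisy 1024-value window reduces to a handful of spline
# coefficients plus the permutation, yet correlates strongly with the original.
rng = np.random.default_rng(0)
window = rng.normal(size=1024)
tck, perm = isabela_like_compress(window)
restored = isabela_like_decompress(tck, perm)
print(np.corrcoef(window, restored)[0, 1])

Because sorting turns a noisy window into a smooth monotone curve, a few spline coefficients typically reproduce it with very high correlation; the price is storing the sort permutation, which the paper addresses with compact index encoding.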

Original language: English
Pages (from-to): 524-540
Number of pages: 17
Journal: Concurrency and Computation: Practice and Experience
Volume: 25
Issue number: 4
DOIs
State: Published - Feb 2013

Funding

Funders: National Science Foundation, funder number 1029711

Keywords

• B-spline
• data-intensive application
• high performance computing
• in situ processing
• lossy compression