Provisioning ZFS pools on Lustre

Rick Mohr, Adam P. Howard

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

While Lustre’s parallelism, performance, and scalability make it desirable as a storage solution for clusters, its limitations prevent it from serving as general-purpose storage for all of a cluster’s needs. In particular, Lustre’s relatively poor performance on small, random I/O and on metadata-intensive workloads makes it less suitable as storage for users’ home areas or working directories for compilation. In this paper, we present an experiment in which a ZFS file system is deployed using files pre-allocated on a Lustre file system as the ZFS storage pools. While this arrangement adds many management options, such as snapshots and finer-grained control over users and quotas, we focus on examining how adding this layer affects the performance of workloads that are suboptimal for Lustre. We benchmark the Lustre file system with and without the ZFS layer, and present and analyze the results.
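The provisioning approach the abstract describes can be illustrated with a short sketch. The following Python script is a hypothetical illustration, not taken from the paper: it pre-allocates backing files on a Lustre mount and then builds a ZFS pool from those file-backed vdevs. The mount point, pool name, file count, and sizes are all assumptions, and running it requires root privileges plus the standard zfs command-line utilities.

    #!/usr/bin/env python3
    """Minimal sketch (assumptions, not the authors' setup): provision a
    ZFS pool backed by files pre-allocated on a Lustre file system."""
    import subprocess

    LUSTRE_MOUNT = "/mnt/lustre"   # hypothetical Lustre mount point
    POOL_NAME = "homepool"         # hypothetical pool name
    FILE_SIZE = "64G"              # hypothetical size of each backing file
    BACKING_FILES = [f"{LUSTRE_MOUNT}/zpool-vdev-{i}.img" for i in range(4)]

    def run(cmd):
        """Echo and execute a command, raising on failure."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Pre-allocate the backing files on the Lustre file system.
    for path in BACKING_FILES:
        run(["fallocate", "-l", FILE_SIZE, path])

    # Create the ZFS pool from the file-backed vdevs. ZFS then layers
    # features such as snapshots and quotas on top of Lustre storage.
    run(["zpool", "create", POOL_NAME] + BACKING_FILES)

    # Example: a per-user dataset with a quota, mounted as a home area.
    run(["zfs", "create", "-o", "quota=20G",
         "-o", "mountpoint=/home/alice", f"{POOL_NAME}/alice"])

With a layout like this, small-file and metadata-heavy operations are served by ZFS, while the underlying I/O to the backing files is aggregated into the larger sequential accesses that Lustre handles well.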

Original language: English
Title of host publication: Proceedings of the Practice and Experience in Advanced Research Computing
Subtitle of host publication: Rise of the Machines (Learning), PEARC 2019
Publisher: Association for Computing Machinery
ISBN (Electronic): 9781450372275
DOIs
State: Published - Jul 28, 2019
Externally published: Yes
Event: 2019 Conference on Practice and Experience in Advanced Research Computing: Rise of the Machines (Learning), PEARC 2019 - Chicago, United States
Duration: Jul 28, 2019 - Aug 1, 2019

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 2019 Conference on Practice and Experience in Advanced Research Computing: Rise of the Machines (Learning), PEARC 2019
Country/Territory: United States
City: Chicago
Period: 07/28/19 - 08/01/19

Keywords

  • Benchmarking
  • Lustre
  • ZFS
