Abstract
Large-scale machine learning (ML) and deep learning (DL) platforms face challenges when integrated with deduplication-enabled storage clusters. While removing duplicate data improves storage utilization, it also alters the I/O transaction layout of the storage system and thereby introduces bottlenecks. It is therefore critical to address this deduplication overhead in order to accelerate ML/DL computation on deduplication storage. Existing state-of-the-art ML/DL storage solutions such as Alluxio and AutoCache adopt caching mechanisms that are not deduplication-aware and thus fail to deliver the needed performance boost when deployed in deduplication-enabled ML/DL clusters. In this paper, we introduce Redup, which eliminates the performance drop caused by enabling deduplication in ML/DL storage clusters. At its core is the Redup Caching Manager (RDCM), a two-tier deduplication layout-aware caching mechanism. The RDCM abstracts the underlying deduplication storage layout away from ML/DL applications and provides decoupled acceleration of object reconstruction during ML/DL read operations. Our evaluation shows that Redup incurs a negligible drop in ML/DL training performance compared to a cluster without deduplication, while significantly outperforming Alluxio and AutoCache across various performance metrics.
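The abstract does not include code, so the Python sketch below is purely illustrative: it shows one plausible organization of a two-tier, deduplication layout-aware cache, with an object tier holding fully reconstructed objects and a chunk tier holding unique deduplicated chunks, reconstructing an object from its chunk recipe on a miss. All names (`TwoTierCache`, `recipe`, `fetch_chunk`) are hypothetical and are not taken from the Redup paper.

```python
from collections import OrderedDict
from typing import Callable, List

class TwoTierCache:
    """Illustrative 2-tier deduplication layout-aware cache (hypothetical
    sketch, not the actual RDCM implementation).

    Tier 1 holds fully reconstructed objects keyed by object ID, hiding the
    deduplicated layout from the ML/DL reader. Tier 2 holds unique chunks
    keyed by fingerprint, so a chunk shared by many objects is fetched from
    backend storage at most once.
    """

    def __init__(self, obj_capacity, chunk_capacity,
                 fetch_chunk: Callable[[str], bytes]):
        self.objects = OrderedDict()    # tier 1: object ID -> bytes (LRU)
        self.chunks = OrderedDict()     # tier 2: fingerprint -> bytes (LRU)
        self.obj_capacity = obj_capacity
        self.chunk_capacity = chunk_capacity
        self.fetch_chunk = fetch_chunk  # backend read for a missing chunk

    def read(self, obj_id: str, recipe: List[str]) -> bytes:
        """Return an object's bytes; on a tier-1 miss, reconstruct it from
        its chunk recipe (the ordered list of chunk fingerprints)."""
        if obj_id in self.objects:       # tier-1 hit: no reconstruction
            self.objects.move_to_end(obj_id)
            return self.objects[obj_id]
        parts = []
        for fp in recipe:                # tier-1 miss: rebuild the object
            if fp in self.chunks:        # tier-2 hit
                self.chunks.move_to_end(fp)
            else:                        # tier-2 miss: fetch from backend
                self.chunks[fp] = self.fetch_chunk(fp)
                if len(self.chunks) > self.chunk_capacity:
                    self.chunks.popitem(last=False)
            parts.append(self.chunks[fp])
        data = b"".join(parts)
        self.objects[obj_id] = data      # populate tier 1
        if len(self.objects) > self.obj_capacity:
            self.objects.popitem(last=False)
        return data
```

In this organization, a tier-1 hit serves a training read without any reconstruction work, while tier 2 exploits chunk sharing across objects. The abstract further states that the actual RDCM decouples and accelerates object reconstruction on the read path, which this synchronous sketch does not model.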
| Original language | English |
| --- | --- |
| Pages (from-to) | 1622-1636 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Big Data |
| Volume | 8 |
| Issue number | 6 |
| DOIs | |
| State | Published - Dec 1 2022 |
Keywords
- machine learning
- big data
- deduplication
- deep learning
- storage management