TY - GEN
T1 - Efficient Parallel Sparse Symmetric Tucker Decomposition for High-Order Tensors
AU - Shivakumar, Shruti
AU - Li, Jiajia
AU - Kannan, Ramakrishnan
AU - Aluru, Srinivas
N1 - Publisher Copyright:
© 2021 by SIAM.
PY - 2021
Y1 - 2021
N2 - Tensor-based methods have received renewed attention in recent years due to their prevalence in diverse real-world applications. There is considerable literature on tensor representations and algorithms for tensor decompositions, both for dense and sparse tensors. Many applications in hypergraph analytics, machine learning, psychometry, and signal processing result in tensors that are both sparse and symmetric, making them an important class for further study. Similar to the critical Tensor Times Matrix chain (TTMc) operation on general sparse tensors, the Sparse Symmetric Tensor Times Same Matrix chain (S3TTMc) operation is compute- and memory-intensive due to the high tensor order and the associated factorial explosion in the number of non-zeros. In this work, we present CSS, a novel compressed storage format for sparse symmetric tensors, along with an efficient parallel algorithm for the S3TTMc operation. We theoretically establish that S3TTMc on CSS achieves a better memory versus run-time trade-off than state-of-the-art implementations. We present experimental results that confirm these findings and achieve up to a 2.9× speedup on synthetic and real datasets.
UR - http://www.scopus.com/inward/record.url?scp=85136332829&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85136332829
T3 - SIAM Conference on Applied and Computational Discrete Algorithms, ACDA 2021
SP - 193
EP - 204
BT - SIAM Conference on Applied and Computational Discrete Algorithms, ACDA 2021
A2 - Bender, Michael A.
A2 - Gilbert, John R.
A2 - Hendrickson, Bruce
A2 - Sullivan, Blair D.
PB - Society for Industrial and Applied Mathematics Publications
T2 - 1st SIAM Conference on Applied and Computational Discrete Algorithms, ACDA 2021
Y2 - 19 July 2021 through 21 July 2021
ER -