Revisiting parallel cyclic reduction and parallel prefix-based algorithms for block tridiagonal systems of equations

Sudip K. Seal, Kalyan S. Perumalla, Steven P. Hirshman

Research output: Contribution to journal › Article › peer-review

16 Scopus citations

Abstract

Direct solvers based on prefix computation and cyclic reduction algorithms exploit the special structure of tridiagonal systems of equations to deliver better parallel performance than solvers designed for more general systems. This performance advantage is even more pronounced for block tridiagonal systems. In this paper, we re-examine the performance of these two algorithms, taking the effect of block size into account. The parameter space spanned by the number of block rows, the block size, and the processor count is shown to divide into regions, each favoring one of the two algorithms. A critical block size that separates these regions is shown to emerge, and its dependence on both problem-dependent parameters and machine-specific constants is established. Empirical verification of these analytical findings is carried out on up to 2048 cores of a Cray XT4 system.
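For readers unfamiliar with the recurrence that both solvers build on, the following is a minimal sketch of serial, scalar cyclic reduction on a tridiagonal system. It is not the paper's parallel or block formulation: the function name, the NumPy implementation, and the restriction to n = 2**m - 1 unknowns are illustrative assumptions, and the block-tridiagonal case replaces the scalar divisions below with small dense matrix inversions whose cost drives the critical-block-size analysis.

```python
import numpy as np

def cyclic_reduction_solve(a, b, c, d):
    """Solve a scalar tridiagonal system by serial cyclic reduction.

    a: subdiagonal (a[0] ignored), b: main diagonal, c: superdiagonal
    (c[-1] ignored), d: right-hand side.  This simple variant assumes
    n = 2**m - 1 unknowns and a nonsingular, well-conditioned system.
    """
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    n = len(b)
    m = int(round(np.log2(n + 1)))
    if 2**m - 1 != n:
        raise ValueError("this sketch assumes n = 2**m - 1")
    a[0] = 0.0   # no coupling outside the matrix
    c[-1] = 0.0

    # Forward reduction: at level l, every active equation eliminates its
    # neighbours at distance 2**l, leaving a reduced tridiagonal system
    # that couples unknowns at distance 2**(l+1).
    for level in range(m - 1):
        half, step = 2**level, 2**(level + 1)
        for i in range(step - 1, n, step):
            alpha = -a[i] / b[i - half]
            beta = -c[i] / b[i + half]
            b[i] += alpha * c[i - half] + beta * a[i + half]
            d[i] += alpha * d[i - half] + beta * d[i + half]
            a[i] = alpha * a[i - half]
            c[i] = beta * c[i + half]

    # Back substitution: solve the single middle unknown first, then
    # recover the remaining unknowns level by level.
    x = np.zeros(n)
    for level in range(m - 1, -1, -1):
        half, step = 2**level, 2**(level + 1)
        for i in range(half - 1, n, step):
            left = x[i - half] if i - half >= 0 else 0.0
            right = x[i + half] if i + half < n else 0.0
            x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
    return x
```

For example, cyclic_reduction_solve([0, 1, 1], [2, 2, 2], [1, 1, 0], [1, 1, 1]) returns [0.5, 0.0, 0.5] for the 3-by-3 system with 2 on the diagonal and 1 on the off-diagonals. In the parallel setting, the inner loop over i at each level is the work distributed across processors, which is why the per-step block-operation cost (cubic in the block size) governs which of the two algorithms wins.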

Original language: English
Pages (from-to): 273-280
Number of pages: 8
Journal: Journal of Parallel and Distributed Computing
Volume: 73
Issue number: 2
DOIs
State: Published - Feb 2013

Keywords

  • Block tridiagonal matrix
  • Cyclic reduction
  • Parallel solver
  • Prefix computation

