Abstract
Direct solvers based on prefix computation and cyclic reduction exploit the special structure of tridiagonal systems of equations to deliver better parallel performance than solvers designed for more general systems. This advantage is even more pronounced for block tridiagonal systems. In this paper, we re-examine the performance of these two algorithms, taking the effect of block size into account. The parameter space spanned by the number of block rows, the size of the blocks, and the processor count is shown to divide into regions that favor one algorithm or the other. A critical block size separating these regions is shown to emerge, and its dependence on both problem-dependent parameters and machine-specific constants is established. Empirical verification of these analytical findings is carried out on up to 2048 cores of a Cray XT4 system.
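To make the structure that both solvers exploit concrete, the following is a minimal serial sketch of cyclic reduction for a scalar (non-block) tridiagonal system; it is an illustration of the general technique, not the paper's parallel block implementation. The function name and the restriction to systems of size 2^k - 1 are assumptions made for simplicity; in the block case each scalar division below becomes a small dense solve, which is the source of the block-size dependence the paper analyzes.

```python
def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system A x = d by serial cyclic reduction.

    a, b, c: sub-, main-, and super-diagonal of A (a[0] and c[-1] must be 0),
    d: right-hand side.  Requires len(b) == 2**k - 1 for some k >= 1.
    Illustrative sketch only; no pivoting, assumes b[i] stays nonzero.
    """
    n = len(b)
    if n == 1:
        return [d[0] / b[0]]
    # Reduction step: fold equations i-1 and i+1 into each odd-indexed
    # equation i, eliminating the even-indexed unknowns.  The surviving
    # equations again form a tridiagonal system, of half the size.
    ra, rb, rc, rd = [], [], [], []
    for i in range(1, n, 2):
        alpha = a[i] / b[i - 1]
        beta = c[i] / b[i + 1]
        ra.append(-alpha * a[i - 1])
        rb.append(b[i] - alpha * c[i - 1] - beta * a[i + 1])
        rc.append(-beta * c[i + 1])
        rd.append(d[i] - alpha * d[i - 1] - beta * d[i + 1])
    x_odd = cyclic_reduction(ra, rb, rc, rd)  # recurse on the half-size system
    # Back-substitution: recover the even-indexed unknowns from their
    # original equations using the already-known odd-indexed neighbors.
    x = [0.0] * n
    for j, i in enumerate(range(1, n, 2)):
        x[i] = x_odd[j]
    for i in range(0, n, 2):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < n - 1 else 0.0
        x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
    return x
```

In a parallel setting each reduction level is independent across rows, giving the log-depth elimination tree that distinguishes cyclic reduction from the sequential Thomas algorithm.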
Original language | English
---|---
Pages (from-to) | 273-280
Number of pages | 8
Journal | Journal of Parallel and Distributed Computing
Volume | 73
Issue number | 2
DOIs | 
State | Published - Feb 2013
Keywords
- Block tridiagonal matrix
- Cyclic reduction
- Parallel solver
- Prefix computation