Abstract
In this paper, we analyze the potential of asynchronous relaxation methods on Graphics Processing Units (GPUs). We develop asynchronous iteration algorithms in CUDA and compare them with parallel implementations of synchronous relaxation methods on CPU- and GPU-based systems. For a set of test matrices from the University of Florida Matrix Collection (UFMC) we investigate convergence behavior, performance, and tolerance to hardware failure. We observe that even our most basic asynchronous relaxation scheme can efficiently leverage the GPU's computing power and, despite its lower convergence rate compared to the Gauss-Seidel relaxation, is still able to provide solution approximations of a certain accuracy in considerably shorter time than Gauss-Seidel running on CPUs or Jacobi running on GPUs. Hence, it more than compensates for the slower convergence by exploiting the scalability of the asynchronous schemes and their good fit to the highly parallel GPU architecture. Further, by enhancing the most basic asynchronous approach with hybrid schemes that perform multiple local iterations within the "subdomain" handled by a GPU thread block, we manage not only to recover the loss of global convergence but often to accelerate convergence by up to a factor of two, while keeping the execution time of a global iteration practically the same. Combined with the advantageous properties of asynchronous iteration methods with respect to hardware failure, this identifies the high potential of asynchronous methods for exascale computing.
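To make the block-asynchronous idea described above concrete, the following is a minimal CUDA sketch, not the paper's implementation: each thread block owns a contiguous group of rows (its "subdomain") and performs several local Jacobi sweeps on them while reading whatever, possibly stale, values of the other components are visible in global memory. The names `block_async_jacobi`, `ROWS_PER_BLOCK`, and `LOCAL_ITERS` are illustrative, and a dense row-major matrix is assumed purely for brevity (the paper works with sparse UFMC matrices).

```cuda
// Hypothetical sketch of a block-asynchronous Jacobi sweep on a dense,
// diagonally dominant system. Not the authors' code.
#include <cstdio>
#include <cuda_runtime.h>

#define ROWS_PER_BLOCK 64   // rows ("subdomain") handled by one thread block
#define LOCAL_ITERS    4    // local Jacobi sweeps per global iteration

__global__ void block_async_jacobi(const double *A, const double *b,
                                   double *x, int n)
{
    int  row    = blockIdx.x * ROWS_PER_BLOCK + threadIdx.x;
    bool active = (row < n);

    // Each block iterates several times on its own rows while reading the
    // (possibly stale) values of the other components currently in global
    // memory -- there is no synchronization between blocks.
    for (int it = 0; it < LOCAL_ITERS; ++it) {
        double x_new = 0.0;
        if (active) {
            double sigma = 0.0;
            for (int j = 0; j < n; ++j)
                if (j != row) sigma += A[row * n + j] * x[j];
            x_new = (b[row] - sigma) / A[row * n + row];
        }
        __syncthreads();              // keep the block's own rows in lock-step
        if (active) x[row] = x_new;   // publish the update to global memory
        __syncthreads();
    }
}

int main()
{
    // Tiny diagonally dominant test system: 4 on the diagonal, 1 on the
    // first off-diagonals, right-hand side of all ones.
    const int n = 256;
    double *A, *b, *x;
    cudaMallocManaged(&A, n * n * sizeof(double));
    cudaMallocManaged(&b, n * sizeof(double));
    cudaMallocManaged(&x, n * sizeof(double));
    for (int i = 0; i < n; ++i) {
        b[i] = 1.0;
        x[i] = 0.0;
        for (int j = 0; j < n; ++j)
            A[i * n + j] = (i == j) ? 4.0
                         : (i == j + 1 || j == i + 1) ? 1.0 : 0.0;
    }

    int blocks = (n + ROWS_PER_BLOCK - 1) / ROWS_PER_BLOCK;
    for (int global_it = 0; global_it < 50; ++global_it)
        block_async_jacobi<<<blocks, ROWS_PER_BLOCK>>>(A, b, x, n);
    cudaDeviceSynchronize();

    printf("x[0] = %f, x[n/2] = %f\n", x[0], x[n / 2]);
    cudaFree(A); cudaFree(b); cudaFree(x);
    return 0;
}
```

In this sketch the local sweeps correspond to the hybrid scheme's inner iterations, while successive kernel launches play the role of global iterations; tolerance to stale or missing updates is what gives the method its robustness to hardware failure.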
| Original language | English |
| --- | --- |
| Pages (from-to) | 1613-1626 |
| Number of pages | 14 |
| Journal | Journal of Parallel and Distributed Computing |
| Volume | 73 |
| Issue number | 12 |
| DOIs | |
| State | Published - 2013 |
Funding
The authors would like to thank the National Science Foundation, the Department of Energy, NVIDIA, and the MathWorks for supporting this research effort. We would also like to thank the Karlsruhe House of Young Scientists (KHYS) for supporting the research cooperation.
Keywords
- Asynchronous relaxation
- Chaotic iteration
- Graphics processing units (GPUs)
- Jacobi method