Preconditioners for Batched Iterative Linear Solvers on GPUs

Isha Aggarwal, Pratik Nayak, Aditya Kashi, Hartwig Anzt

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Batched iterative solvers can be an attractive alternative to batched direct solvers if the linear systems allow for fast convergence. In non-batched settings, iterative solvers are often enhanced with sophisticated preconditioners to improve convergence. In this paper, we develop preconditioners for batched iterative solvers that improve convergence without incurring detrimental resource overhead, while preserving much of the iterative solvers' flexibility. We detail the design and implementation considerations, present a user-friendly interface to the batched preconditioners, and demonstrate the convergence and runtime benefits over non-preconditioned batched iterative solvers on state-of-the-art GPUs for a variety of benchmark problems drawn from finite difference stencil matrices, the SuiteSparse Matrix Collection, and a computational chemistry application.
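
The paper's actual Ginkgo interface is not reproduced here. The following is a minimal, illustrative CPU-side sketch of the underlying idea: a scalar-Jacobi (diagonal) preconditioner applied inside an iterative sweep that is run independently on each small system of a batch. All names (BatchEntry, solve_entry) are hypothetical and are not part of Ginkgo's API; the GPU implementations described in the paper would instead process the batch entries in parallel.

// Illustrative only: scalar-Jacobi-preconditioned Richardson sweeps applied
// independently to each small system in a batch. Layouts and names are
// hypothetical, not the batched interface presented in the paper.
#include <cmath>
#include <cstdio>
#include <vector>

// One batch entry: a dense n x n matrix (row-major) and its right-hand side.
struct BatchEntry {
    int n;
    std::vector<double> A;  // n*n entries, row-major
    std::vector<double> b;  // n entries
};

// Jacobi-preconditioned Richardson iteration: x <- x + D^{-1} (b - A x).
// Returns the number of iterations used for this batch entry.
int solve_entry(const BatchEntry& e, std::vector<double>& x,
                int max_iters, double tol) {
    const int n = e.n;
    std::vector<double> r(n);
    for (int it = 0; it < max_iters; ++it) {
        double res_norm = 0.0;
        for (int i = 0; i < n; ++i) {
            double Ax_i = 0.0;
            for (int j = 0; j < n; ++j) Ax_i += e.A[i * n + j] * x[j];
            r[i] = e.b[i] - Ax_i;
            res_norm += r[i] * r[i];
        }
        if (std::sqrt(res_norm) < tol) return it;
        // Preconditioner application: scale the residual by the inverse diagonal.
        for (int i = 0; i < n; ++i) x[i] += r[i] / e.A[i * n + i];
    }
    return max_iters;
}

int main() {
    // A batch of two diagonally dominant 3x3 systems (Jacobi converges here).
    std::vector<BatchEntry> batch = {
        {3, {4, 1, 0, 1, 4, 1, 0, 1, 4}, {1, 2, 3}},
        {3, {5, 1, 1, 1, 6, 1, 1, 1, 7}, {1, 1, 1}},
    };
    for (std::size_t k = 0; k < batch.size(); ++k) {
        std::vector<double> x(batch[k].n, 0.0);
        int iters = solve_entry(batch[k], x, 200, 1e-10);
        std::printf("system %zu converged in %d iterations\n", k, iters);
    }
    return 0;
}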

Original language: English
Title of host publication: Accelerating Science and Engineering Discoveries Through Integrated Research Infrastructure for Experiment, Big Data, Modeling and Simulation - 22nd Smoky Mountains Computational Sciences and Engineering Conference, SMC 2022, Revised Selected Papers
Editors: Doug Kothe, Al Geist, Swaroop Pophale, Hong Liu, Suzanne Parete-Koon
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 38-53
Number of pages: 16
ISBN (Print): 9783031236051
DOIs
State: Published - 2022
Externally published: Yes
Event: Smoky Mountains Computational Sciences and Engineering Conference, SMC 2022 - Virtual, Online
Duration: Aug 24, 2022 - Aug 25, 2022

Publication series

Name: Communications in Computer and Information Science
Volume: 1690 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: Smoky Mountains Computational Sciences and Engineering Conference, SMC 2022
City: Virtual, Online
Period: 08/24/22 - 08/25/22

Funding

This work was performed on the HoreKa supercomputer funded by the Ministry of Science, Research and the Arts Baden-Württemberg and by the Federal Ministry of Education and Research. This research was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration.

Funders
Office of Science
National Nuclear Security Administration
Bundesministerium für Bildung und Forschung
Ministerium für Wissenschaft, Forschung und Kunst Baden-Württemberg

Keywords

• Batched preconditioners
• Batched solvers
• GPU
• Ginkgo
• Sparse linear systems
