Towards continuous benchmarking: An automated performance evaluation framework for high performance software

Hartwig Anzt, Yen Chen Chen, Terry Cojean, Jack Dongarra, Goran Flegar, Pratik Nayak, Enrique S. Quintana-Ortí, Yuhsiang M. Tsai, Weichung Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

6 Scopus citations

Abstract

We present a performance evaluation framework that enables an automated workflow for testing and performance evaluation of software libraries. Integrating this component into an ecosystem enables sustainable software development, as a community effort, via a web application for interactively evaluating the performance of individual software components. The performance evaluation tool is based exclusively on web technologies, which removes the burden of downloading performance data or installing additional software. We employ this framework for the Ginkgo software ecosystem, but the framework can be used with essentially any software project, including comparisons between different software libraries. The Continuous Integration (CI) framework of Ginkgo is also extended to automatically run a benchmark suite on predetermined HPC systems, store the state of the machine and the environment along with the compiled binaries, and collect the results in a publicly accessible, Git-based performance data repository. The Ginkgo performance explorer (GPE) can be used to retrieve the performance data from the repository and visualize it in a web browser. GPE also implements an interface that allows users to write scripts, archived in a Git repository, to extract particular data, compute particular metrics, and visualize them in many different formats (as specified by the script). The combination of these approaches creates a workflow that enables performance reproducibility and software sustainability of scientific software. In this paper, we present example scripts that extract and visualize performance data for Ginkgo's SpMV kernels, allowing users to identify the optimal kernel for specific problem characteristics.
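The kernel-selection task the abstract describes can be sketched in a few lines: given per-matrix benchmark records, pick the SpMV kernel with the lowest measured runtime. The JSON schema below is purely illustrative and is not Ginkgo's actual data layout; matrix names, fields, and timings are hypothetical.

```python
# Illustrative sketch: selecting the fastest SpMV kernel per matrix from
# benchmark records. The schema and values here are hypothetical, not
# Ginkgo's actual performance-data format.
import json

records = json.loads("""
[
  {"matrix": "thermal2", "nnz": 8580313,
   "spmv": {"csr": {"time": 1.9e-3}, "ell": {"time": 2.4e-3},
            "coo": {"time": 3.1e-3}}},
  {"matrix": "wiki-Talk", "nnz": 5021410,
   "spmv": {"csr": {"time": 4.2e-3}, "ell": {"time": 9.9e-3},
            "coo": {"time": 2.7e-3}}}
]
""")

def best_kernel(record):
    """Return (kernel_name, data) for the lowest measured SpMV time."""
    return min(record["spmv"].items(), key=lambda kv: kv[1]["time"])

for rec in records:
    name, data = best_kernel(rec)
    print(f'{rec["matrix"]}: {name} ({data["time"]:.2e} s)')
```

In the paper's workflow, such a script would live in a Git repository and be executed by GPE against data fetched from the performance-data repository; here the records are inlined so the sketch is self-contained.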

Original language: English
Title of host publication: Proceedings of the Platform for Advanced Scientific Computing Conference, PASC 2019
Publisher: Association for Computing Machinery, Inc
ISBN (Electronic): 9781450367707
DOIs
State: Published - Jun 12 2019
Event: 6th Platform for Advanced Scientific Computing Conference, PASC 2019 - Zurich, Switzerland
Duration: Jun 12 2019 – Jun 14 2019

Publication series

Name: Proceedings of the Platform for Advanced Scientific Computing Conference, PASC 2019

Conference

Conference: 6th Platform for Advanced Scientific Computing Conference, PASC 2019
Country/Territory: Switzerland
City: Zurich
Period: 06/12/19 – 06/14/19

Funding

This research was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration, under prime contract #DE-AC05-00OR22725, and UT Battelle subaward #4000152412. H. Anzt, T. Cojean, and P. Nayak were supported by the “Impuls und Vernetzungsfond” of the Helmholtz Association under grant VH-NG-1241.

Funder: National Nuclear Security Administration
Funder numbers: 4000152412, DE-AC05-00OR22725

Keywords

• Automated performance benchmarking
• Continuous integration
• Healthy software lifecycle
• Interactive performance visualization
