Extreme scaling of production visualization software on diverse architectures

Hank Childs, David Pugmire, Sean Ahern, Brad Whitlock, M. Howison, Prabhat, Gunther H. Weber, E. Wes Bethel

Research output: Contribution to journal › Article › peer-review

73 Scopus citations

Abstract

A series of experiments studied how visualization software scales to massive data sets. Although several paradigms exist for processing large data, the experiments focused on pure parallelism, the dominant approach in production software. The experiments used multiple visualization algorithms, ran on multiple architectures, and targeted massive-scale, weak-scaling processing: 16,000 or more cores operating on data sets of one trillion or more cells, the largest sizes published to date in the visualization literature. The findings on scaling characteristics and bottlenecks will help researchers understand how pure parallelism performs at high concurrency on very large data sets.
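
The weak-scaling methodology the abstract refers to can be summarized in a few lines. The sketch below is illustrative only and is not taken from the paper; the per-core cell count is a hypothetical value chosen so that 16,000 cores corresponds to the one-trillion-cell scale the abstract mentions.

```python
# Illustrative sketch (not from the paper): a weak-scaling study holds the
# per-core workload constant while the core count grows, so the total
# problem size grows proportionally.

CELLS_PER_CORE = 62_500_000  # hypothetical fixed per-core load (16,000 cores -> 1 trillion cells)

def weak_scaling_configs(core_counts):
    """Yield (cores, total_cells) pairs with a constant per-core cell count,
    the defining property of a weak-scaling experiment."""
    for cores in core_counts:
        yield cores, cores * CELLS_PER_CORE

if __name__ == "__main__":
    for cores, total_cells in weak_scaling_configs([2_000, 4_000, 8_000, 16_000]):
        print(f"{cores:>6} cores -> {total_cells / 1e12:.3f} trillion cells")
```

Under weak scaling, ideal behavior is constant runtime across all concurrency levels; deviations point to bottlenecks such as I/O or interprocess communication, which appear among the keywords below.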

Original language: English
Article number: 10
Pages (from-to): 22-31
Number of pages: 10
Journal: IEEE Computer Graphics and Applications
Volume: 30
Issue number: 3
State: Published - 2010

Funding

This work was supported by the Director, Office of Advanced Scientific Computing Research, Office of Science, of the US Department of Energy (DOE) under contract DE-AC02-05CH11231 through the Scientific Discovery through Advanced Computing program’s Visualization and Analytics Center for Enabling Technologies. We thank Mark Miller for status update improvements and the anonymous reviewers, whose suggestions greatly improved this article. The following resources contributed to our research results: the National Energy Research Scientific Computing Center (NERSC), which is supported by the US DOE Office of Science under contract DE-AC02-05CH11231; the Livermore Computing Center at Lawrence Livermore National Laboratory (LLNL), which is supported by the US DOE National Nuclear Security Administration under contract DE-AC52-07NA27344; the Center for Computational Sciences at Oak Ridge National Laboratory (ORNL), which is supported by the US DOE Office of Science under contract DE-AC05-00OR22725; and the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, which provided HPC resources. We thank the personnel at the computing centers who helped us perform our runs, specifically Katie Antypas, Kathy Yelick, Francesca Verdier, and Howard Walter of NERSC; Paul Navratil, Kelly Gaither, and Karl Schulz of TACC; James Hack, Doug Kothe, Arthur Bland, and Ricky Kendall of ORNL’s Leadership Computing Facility; and David Fox, Debbie Santa Maria, and Brian Carnes of LLNL’s Livermore Computing.

Funders and funder numbers:

• Center for Computational Sciences
• Texas Advanced Computing Center
• DOE Office of Science
• US Department of Energy
• U.S. Department of Energy: DE-AC02-05CH11231
• Office of Science
• National Nuclear Security Administration: DE-AC52-07NA27344
• Advanced Scientific Computing Research
• Oak Ridge National Laboratory: DE-AC05-00OR22725
• University of Texas at Austin

Keywords

• Computer graphics
• Dawn
• Denovo
• Graphics and multimedia
• I/O performance
• Interprocess communication
• Many-core processing
• Petascale computing
• Pure parallelism
• Very large data sets
• VisIt
• Visualization
