Abstract
A series of experiments studied how visualization software scales to massive data sets. Although several paradigms exist for processing large data, the experiments focused on pure parallelism, the dominant approach for production software. The experiments used multiple visualization algorithms and ran on multiple architectures. They focused on massive-scale processing (16,000 or more cores and one trillion or more cells) and weak scaling. These experiments employed the largest data set sizes published to date in the visualization literature. The findings on scaling characteristics and bottlenecks will help researchers understand how pure parallelism performs at high levels of concurrency with very large data sets.
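The pure-parallelism and weak-scaling setup the abstract describes can be illustrated with a minimal sketch: each core (MPI rank) loads and processes its own fixed-size piece of the mesh, so the total problem size grows with the core count. This is not code from the study; names such as `CELLS_PER_RANK` and `process_cells` are illustrative assumptions only.

```cpp
// Minimal sketch of the pure-parallelism pattern: every rank owns a
// fixed-size piece of the data, processes it independently, and a single
// reduction gathers the result. Under ideal weak scaling, elapsed time
// stays flat as ranks (and total cells) grow.
#include <mpi.h>
#include <vector>
#include <cstdio>

static const long CELLS_PER_RANK = 1000000L;  // per-core problem size held constant

// Stand-in for a visualization kernel (e.g., contouring or slicing)
// applied only to the cells owned by this rank.
static double process_cells(const std::vector<double>& cells)
{
    double result = 0.0;
    for (double v : cells) result += v;  // placeholder work
    return result;
}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, nranks = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    // Each rank "loads" only its own portion of the data set.
    std::vector<double> myCells(CELLS_PER_RANK, 1.0);

    double t0 = MPI_Wtime();
    double local = process_cells(myCells);
    double elapsed = MPI_Wtime() - t0;

    double global = 0.0, maxTime = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Reduce(&elapsed, &maxTime, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("%d ranks, %ld total cells, %.3f s\n",
                    nranks, nranks * CELLS_PER_RANK, maxTime);

    MPI_Finalize();
    return 0;
}
```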
Original language | English |
---|---|
Article number | 10 |
Pages (from-to) | 22-31 |
Number of pages | 10 |
Journal | Unknown Journal |
Volume | 30 |
Issue number | 3 |
State | Published - 2010 |
Funding
This work was supported by the Director, Office of Advanced Scientific Computing Research, Office of Science, of the US Department of Energy (DOE) under contract DE-AC02-05CH11231 through the Scientific Discovery through Advanced Computing program’s Visualization and Analytics Center for Enabling Technologies. We thank Mark Miller for status update improvements and the anonymous reviewers, whose suggestions greatly improved this article. The following resources contributed to our research results: the National Energy Research Scientific Computing Center (NERSC), which is supported by the US DOE Office of Science under contract DE-AC02-05CH11231; the Livermore Computing Center at Lawrence Livermore National Laboratory (LLNL), which is supported by the US DOE National Nuclear Security Administration under contract DE-AC52-07NA27344; the Center for Computational Sciences at Oak Ridge National Laboratory (ORNL), which is supported by the US DOE Office of Science under contract DE-AC05-00OR22725; and the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, which provided HPC resources. We thank the personnel at the computing centers who helped us perform our runs, specifically Katie Antypas, Kathy Yelick, Francesca Verdier, and Howard Walter of NERSC; Paul Navratil, Kelly Gaither, and Karl Schulz of TACC; James Hack, Doug Kothe, Arthur Bland, and Ricky Kendall of ORNL’s Leadership Computing Facility; and David Fox, Debbie Santa Maria, and Brian Carnes of LLNL’s Livermore Computing.
Funders | Funder number |
---|---|
US Department of Energy | DE-AC02-05CH11231 |
Office of Science | |
Advanced Scientific Computing Research | |
National Nuclear Security Administration | DE-AC52-07NA27344 |
Oak Ridge National Laboratory | DE-AC05-00OR22725 |
Center for Computational Sciences | |
Texas Advanced Computing Center | |
University of Texas at Austin | |
Keywords
- Computer graphics
- Dawn
- Denovo
- Graphics and multimedia
- I/O performance
- Interprocess communication
- Many-core processing
- Petascale computing
- Pure parallelism
- Very large data sets
- VisIt
- Visualization