TY - JOUR
T1 - Big data and extreme-scale computing
T2 - Pathways to Convergence-Toward a shaping strategy for a future software and data ecosystem for scientific inquiry
AU - Asch, M.
AU - Moore, T.
AU - Badia, R.
AU - Beck, M.
AU - Beckman, P.
AU - Bidot, T.
AU - Bodin, F.
AU - Cappello, F.
AU - Choudhary, A.
AU - de Supinski, B.
AU - Deelman, E.
AU - Dongarra, J.
AU - Dubey, A.
AU - Fox, G.
AU - Fu, H.
AU - Girona, S.
AU - Gropp, W.
AU - Heroux, M.
AU - Ishikawa, Y.
AU - Keahey, K.
AU - Keyes, D.
AU - Kramer, W.
AU - Lavignon, J. F.
AU - Lu, Y.
AU - Matsuoka, S.
AU - Mohr, B.
AU - Reed, D.
AU - Requena, S.
AU - Saltz, J.
AU - Schulthess, T.
AU - Stevens, R.
AU - Swany, M.
AU - Szalay, A.
AU - Tang, W.
AU - Varoquaux, G.
AU - Vilotte, J. P.
AU - Wisniewski, R.
AU - Xu, Z.
AU - Zacharov, I.
N1 - Publisher Copyright:
© The Author(s) 2018.
PY - 2018/7/1
Y1 - 2018/7/1
N2 - Over the past four years, the Big Data and Exascale Computing (BDEC) project organized a series of five international workshops that aimed to explore the ways in which the new forms of data-centric discovery introduced by the ongoing revolution in high-end data analysis (HDA) might be integrated with the established, simulation-centric paradigm of the high-performance computing (HPC) community. Based on those meetings, we argue that the rapid proliferation of digital data generators, the unprecedented growth in the volume and diversity of the data they generate, and the intense evolution of the methods for analyzing and using that data are radically reshaping the landscape of scientific computing. The most critical problems involve the logistics of wide-area, multistage workflows that will move back and forth across the computing continuum, between the multitude of distributed sensors, instruments and other devices at the network's edge, and the centralized resources of commercial clouds and HPC centers. We suggest that the prospects for the future integration of technological infrastructures and research ecosystems need to be considered at three different levels. First, we discuss the convergence of research applications and workflows that establish a research paradigm that combines both HPC and HDA, where ongoing progress is already motivating efforts at the other two levels. Second, we offer an account of some of the problems involved with creating a converged infrastructure for peripheral environments, that is, a shared infrastructure that can be deployed throughout the network in a scalable manner to meet the highly diverse requirements for processing, communication, and buffering/storage of massive data workflows of many different scientific domains. Third, we focus on some opportunities for software ecosystem convergence in big, logically centralized facilities that execute large-scale simulations and models and/or perform large-scale data analytics. We close by offering some conclusions and recommendations for future investment and policy review.
AB - Over the past four years, the Big Data and Exascale Computing (BDEC) project organized a series of five international workshops that aimed to explore the ways in which the new forms of data-centric discovery introduced by the ongoing revolution in high-end data analysis (HDA) might be integrated with the established, simulation-centric paradigm of the high-performance computing (HPC) community. Based on those meetings, we argue that the rapid proliferation of digital data generators, the unprecedented growth in the volume and diversity of the data they generate, and the intense evolution of the methods for analyzing and using that data are radically reshaping the landscape of scientific computing. The most critical problems involve the logistics of wide-area, multistage workflows that will move back and forth across the computing continuum, between the multitude of distributed sensors, instruments and other devices at the network's edge, and the centralized resources of commercial clouds and HPC centers. We suggest that the prospects for the future integration of technological infrastructures and research ecosystems need to be considered at three different levels. First, we discuss the convergence of research applications and workflows that establish a research paradigm that combines both HPC and HDA, where ongoing progress is already motivating efforts at the other two levels. Second, we offer an account of some of the problems involved with creating a converged infrastructure for peripheral environments, that is, a shared infrastructure that can be deployed throughout the network in a scalable manner to meet the highly diverse requirements for processing, communication, and buffering/storage of massive data workflows of many different scientific domains. Third, we focus on some opportunities for software ecosystem convergence in big, logically centralized facilities that execute large-scale simulations and models and/or perform large-scale data analytics. We close by offering some conclusions and recommendations for future investment and policy review.
KW - Big data
KW - extreme-scale computing
KW - future software
KW - high-end data analysis
KW - traditional HPC
UR - http://www.scopus.com/inward/record.url?scp=85050197185&partnerID=8YFLogxK
U2 - 10.1177/1094342018778123
DO - 10.1177/1094342018778123
M3 - Review article
AN - SCOPUS:85050197185
SN - 1094-3420
VL - 32
SP - 435
EP - 479
JO - International Journal of High Performance Computing Applications
JF - International Journal of High Performance Computing Applications
IS - 4
ER -