TY - GEN
T1 - Coupling exascale multiphysics applications
T2 - 14th IEEE International Conference on eScience, e-Science 2018
AU - Choi, Jong Youl
AU - Chang, Choong Seock
AU - Dominski, Julien
AU - Klasky, Scott
AU - Merlo, Gabriele
AU - Suchyta, Eric
AU - Ainsworth, Mark
AU - Allen, Bryce
AU - Cappello, Franck
AU - Churchill, Michael
AU - Davis, Philip
AU - Di, Sheng
AU - Eisenhauer, Greg
AU - Ethier, Stephane
AU - Foster, Ian
AU - Geveci, Berk
AU - Guo, Hanqi
AU - Huck, Kevin
AU - Jenko, Frank
AU - Kim, Mark
AU - Kress, James
AU - Ku, Seung Hoe
AU - Liu, Qing
AU - Logan, Jeremy
AU - Malony, Allen
AU - Mehta, Kshitij
AU - Moreland, Kenneth
AU - Munson, Todd
AU - Parashar, Manish
AU - Peterka, Tom
AU - Podhorszki, Norbert
AU - Pugmire, Dave
AU - Tugluk, Ozan
AU - Wang, Ruonan
AU - Whitney, Ben
AU - Wolf, Matthew
AU - Wood, Chad
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/12/24
Y1 - 2018/12/24
N2 - With the growing computational complexity of science and the complexity of new and emerging hardware, it is time to re-evaluate the traditional monolithic design of computational codes. One new paradigm is constructing larger scientific computational experiments by coupling multiple individual scientific applications, each targeting its own physics, characteristic lengths, and/or scales. We present a framework constructed by leveraging capabilities such as in-memory communications, workflow scheduling on HPC resources, and continuous performance monitoring. This code coupling capability is demonstrated by a fusion science scenario in which the plasma at the edge and at the core of a device requires different physical descriptions. This infrastructure not only enables the coupling of the physics components, but also connects in situ or online analysis, compression, and visualization, shortening the time between a run and the analysis of its science content. Results from runs on Titan and Cori are presented as a demonstration.
AB - With the growing computational complexity of science and the complexity of new and emerging hardware, it is time to re-evaluate the traditional monolithic design of computational codes. One new paradigm is constructing larger scientific computational experiments by coupling multiple individual scientific applications, each targeting its own physics, characteristic lengths, and/or scales. We present a framework constructed by leveraging capabilities such as in-memory communications, workflow scheduling on HPC resources, and continuous performance monitoring. This code coupling capability is demonstrated by a fusion science scenario in which the plasma at the edge and at the core of a device requires different physical descriptions. This infrastructure not only enables the coupling of the physics components, but also connects in situ or online analysis, compression, and visualization, shortening the time between a run and the analysis of its science content. Results from runs on Titan and Cori are presented as a demonstration.
KW - Coupling
KW - In situ analysis
KW - Staging
UR - http://www.scopus.com/inward/record.url?scp=85061372048&partnerID=8YFLogxK
U2 - 10.1109/eScience.2018.00133
DO - 10.1109/eScience.2018.00133
M3 - Conference contribution
AN - SCOPUS:85061372048
T3 - Proceedings - IEEE 14th International Conference on eScience, e-Science 2018
SP - 442
EP - 452
BT - Proceedings - IEEE 14th International Conference on eScience, e-Science 2018
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 29 October 2018 through 1 November 2018
ER -