TY - GEN
T1 - Accelerating Scientific Simulations with Bi-Fidelity Weighted Transfer Learning
AU - Borowiec, Katarzyna
AU - Lu, Dan
AU - Chandan, Vikas
AU - Chatterjee, Samrat
AU - Ramuhalli, Pradeep
AU - Tipireddy, Ramakrishna
AU - Halappanavar, Mahantesh
AU - Liu, Frank
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - High-fidelity modeling is an essential design tool for many engineering applications. However, for complex systems, computational cost can be a limiting factor. Parameter sensitivity analysis, uncertainty quantification, and design optimization require many model evaluations. Surrogate models are often used to develop the relationship between model parameters and quantities of interest. However, in the case of complex systems, surrogate models require many degrees of freedom and, thus, a large number of data points to determine the correct dependencies. For many applications, this may be prohibitively expensive. The reduction of computational requirements can be achieved by leveraging low-fidelity models. Low-fidelity models represent the system at a coarser resolution with the advantage of computational efficiency. Therefore, a bi-fidelity modeling paradigm, which augments the accuracy of a low-fidelity model in a computationally efficient manner by invoking limited runs of a high-fidelity model, can be leveraged to balance accuracy and computational requirements. In this work, a bi-fidelity weighted transfer learning method using neural networks was applied to a computational fluid dynamics heat transfer modeling problem. The transfer learning advantage was investigated as a function of hyperparameters. Our main finding is that the use of a bi-fidelity modeling paradigm achieves accuracy close to that of a high-fidelity Gaussian process model while significantly reducing computational cost.
AB - High-fidelity modeling is an essential design tool for many engineering applications. However, for complex systems, computational cost can be a limiting factor. Parameter sensitivity analysis, uncertainty quantification, and design optimization require many model evaluations. Surrogate models are often used to develop the relationship between model parameters and quantities of interest. However, in the case of complex systems, surrogate models require many degrees of freedom and, thus, a large number of data points to determine the correct dependencies. For many applications, this may be prohibitively expensive. The reduction of computational requirements can be achieved by leveraging low-fidelity models. Low-fidelity models represent the system at a coarser resolution with the advantage of computational efficiency. Therefore, a bi-fidelity modeling paradigm, which augments the accuracy of a low-fidelity model in a computationally efficient manner by invoking limited runs of a high-fidelity model, can be leveraged to balance accuracy and computational requirements. In this work, a bi-fidelity weighted transfer learning method using neural networks was applied to a computational fluid dynamics heat transfer modeling problem. The transfer learning advantage was investigated as a function of hyperparameters. Our main finding is that the use of a bi-fidelity modeling paradigm achieves accuracy close to that of a high-fidelity Gaussian process model while significantly reducing computational cost.
KW - bi-fidelity modeling
KW - surrogate modeling
KW - transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85190095735&partnerID=8YFLogxK
U2 - 10.1109/ICMLA58977.2023.00147
DO - 10.1109/ICMLA58977.2023.00147
M3 - Conference contribution
AN - SCOPUS:85190095735
T3 - Proceedings - 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023
SP - 994
EP - 999
BT - Proceedings - 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023
A2 - Arif Wani, M.
A2 - Boicu, Mihai
A2 - Sayed-Mouchaweh, Moamar
A2 - Abreu, Pedro Henriques
A2 - Gama, Joao
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023
Y2 - 15 December 2023 through 17 December 2023
ER -