EFFICIENT MAPPING BETWEEN VOID SHAPES AND STRESS FIELDS USING DEEP CONVOLUTIONAL NEURAL NETWORKS WITH SPARSE DATA

Anindya Bhaduri, Nesar Ramachandra, Sandipp Krishnan Ravi, Lele Luan, Piyush Pandita, Prasanna Balaprakash, Mihai Anitescu, Changjie Sun, Liping Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Establishing fast and accurate structure-to-property relationships is an important component in the design and discovery of materials. Physics-based simulation models like the finite element method (FEM) are often used to predict deformation, stress, and strain fields as a function of material microstructure in material and structural systems. Such models may be computationally expensive and time-intensive if the underlying physics of the system is complex. This limits their applicability to inverse design problems and to identifying structures that maximize performance. In such scenarios, surrogate models are employed to make the forward mapping efficient. Still, the high dimensionality of the input microstructure and the output field of interest may render them inefficient, especially when dealing with sparse data. Deep convolutional neural network (CNN)-based surrogate models have typically been found useful in handling such high-dimensional problems. In this paper, the system under study is a single ellipsoidal void structure under a uniaxial tensile load, represented by a linear elastic FEM model. We consider two deep CNN architectures, a modified Convolutional Autoencoder (CAE) framework with a fully connected bottleneck and a UNet CNN, and compare their accuracy in predicting the von Mises stress field for any given input void shape in the FEM model. A sensitivity analysis is also performed using the two methods, where the variation in prediction accuracy on unseen test data is studied as the number of training samples increases from 20 to 100.
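To make the CAE-with-bottleneck idea concrete, a minimal sketch is given below. It is not the authors' implementation: the use of PyTorch, the 64 x 64 image resolution, the channel widths, and the latent dimension are all illustrative assumptions. The sketch only shows the general shape of a surrogate that maps a void-shape mask image to a stress-field image through a fully connected bottleneck.

```python
# Illustrative sketch (assumed PyTorch, assumed 64x64 images) of a convolutional
# autoencoder with a fully connected bottleneck that maps a binary void-shape
# mask to a predicted von Mises stress field. Channel widths and latent size
# are placeholder choices, not the settings reported in the paper.
import torch
import torch.nn as nn


class CAESurrogate(nn.Module):
    def __init__(self, img_size=64, latent_dim=128):
        super().__init__()
        # Convolutional encoder: downsample the void mask 64 -> 32 -> 16 -> 8.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        feat = 64 * (img_size // 8) ** 2
        # Fully connected bottleneck: compress the feature map to a low-
        # dimensional latent vector, then expand it back to a feature map.
        self.bottleneck = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, feat), nn.ReLU(),
            nn.Unflatten(1, (64, img_size // 8, img_size // 8)),
        )
        # Transposed-convolution decoder: upsample 8 -> 16 -> 32 -> 64.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, void_mask):
        # void_mask: (batch, 1, H, W) binary image of the ellipsoidal void;
        # returns: (batch, 1, H, W) predicted von Mises stress field.
        return self.decoder(self.bottleneck(self.encoder(void_mask)))


if __name__ == "__main__":
    model = CAESurrogate()
    x = torch.rand(4, 1, 64, 64)   # stand-in for FEM void-mask inputs
    y_pred = model(x)
    print(y_pred.shape)            # torch.Size([4, 1, 64, 64])
```

The UNet variant studied in the paper would replace the fully connected bottleneck with skip connections between encoder and decoder stages; both networks are trained end-to-end on the small set of FEM-generated mask/stress-field pairs.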

Original language: English
Title of host publication: 43rd Computers and Information in Engineering Conference (CIE)
Publisher: American Society of Mechanical Engineers (ASME)
ISBN (Electronic): 9780791887295
DOIs
State: Published - 2023
Event: ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, IDETC-CIE 2023 - Boston, United States
Duration: Aug 20 2023 - Aug 23 2023

Publication series

Name: Proceedings of the ASME Design Engineering Technical Conference
Volume: 2

Conference

Conference: ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, IDETC-CIE 2023
Country/Territory: United States
City: Boston
Period: 08/20/23 - 08/23/23

Funding

This material is based upon work supported by the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy (EERE) under the Advanced Manufacturing Office, Award Number DE-AC02-06CH11357. The views expressed herein do not necessarily represent the views of the U.S. Department of Energy or the United States Government. Work at Argonne National Laboratory was supported by the U.S. Department of Energy, Office of High Energy Physics. Argonne, a U.S. Department of Energy Office of Science Laboratory, is operated by UChicago Argonne, LLC under contract no. DE-AC02-06CH11357. This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the US Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). Part of the analysis here was carried out on Swing, a GPU system at the Laboratory Computing Resource Center (LCRC) of Argonne National Laboratory. We would also like to thank Dr. Aymeric Moinet, at General Electric Research, for his insights into the ellipsoidal void problem.

Keywords

  • convolutional neural networks
  • deep learning
  • sparse data
  • stress field
  • surrogate modeling
  • void geometry
