TY - GEN
T1 - Deep Learning for Void Detection in Composite Oriented Strand Board
AU - Hu, Wenyue
AU - Wang, Xiaoxing
AU - Bowland, Christopher C.
AU - Nguyen, Phu
AU - Li, Carina Xiaochen
AU - Nutt, Steven
AU - Jin, Bo
N1 - Publisher Copyright:
© 2022. Used by CAMX - The Composites and Advanced Materials Expo. CAMX Conference Proceedings. Anaheim, CA, October 17-20, 2022. CAMX - The Composites and Advanced Materials Expo.
PY - 2022
Y1 - 2022
AB - X-ray micro-computed tomography (micro-CT) offers the ability to assess and quantify microstructural characteristics in general, and the distribution of voids in particular, in composite laminates. Despite these capabilities, analysis of micro-CT data generally requires extensive training and human input for detection, counting, morphology analysis, and prediction of mechanical performance. In recent years, advances in Deep Learning (DL) have been applied to challenging image-processing tasks, such as automatic detection and analysis of features in images and rapid classification of features with minimal human intervention or oversight. The application of DL simplifies the tomography analysis pipeline by automating void detection and segregation, providing an accessible path to in-depth studies of porosity evolution. This study describes an automated void segmentation solution built within commercial software (MATLAB) and applied to cross-sectional scans of Composite Oriented Strand Board (COSB). By training on labeled data produced by grayscale binary masking, three representative neural networks are assessed based on their respective performance and accuracy in void detection. A Fully Convolutional Network (FCN) performed semantic segmentation at the pixel level. A modified FCN, SegNet, was created by making the encoder-decoder structure symmetrical. The third network, U-Net (widely used in biomedical image segmentation), was considered the state-of-the-art segmentation solution. Compared against the manually labeled dataset, FCN yielded the most statistically accurate results and successfully incorporated boundary-aware segmentation, outperforming the other two networks. Furthermore, FCN could be combined with pre-processing binary masking to develop an autonomous annotation tool for void-content studies. SegNet, intended for scene understanding, falsely occupied a larger area than the labeled ground-truth voids, while U-Net exhibited limitations in depicting continuous boundaries when irregularities and perturbations were present.
UR - http://www.scopus.com/inward/record.url?scp=85159476254&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85159476254
T3 - Composites and Advanced Materials Expo, CAMX 2022
BT - Composites and Advanced Materials Expo, CAMX 2022
PB - The Composites and Advanced Materials Expo (CAMX)
T2 - 2022 Annual Composites and Advanced Materials Expo, CAMX 2022
Y2 - 17 October 2022 through 20 October 2022
ER -