Multi-organ segmentation in clinical-computed tomography for patient-specific image quality and dose metrology

Wanyi Fu, Shobhit Sharma, Taylor Smith, Rui Hou, Ehsan Abadi, Vignesh Selvakumaran, Ruixiang Tang, Joseph Y. Lo, W. Paul Segars, Anuj J. Kapadia, Justin B. Solomon, Geoffrey D. Rubin, Ehsan Samei

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

7 Scopus citations

Abstract

The purpose of this study was to develop a robust, automated multi-organ segmentation model for clinical adult and pediatric CT and to implement the model as part of a patient-specific safety and quality monitoring system. 3D convolutional neural network (U-Net) models were set up to segment 30 different organs and structures at the diagnostic image resolution. For each organ, 200 manually labeled cases were used to train the network, fitting it to different clinical imaging resolutions and contrast-enhancement stages. The dataset was randomly shuffled and divided with a 6/2/2 train/validation/test split. The model was deployed to automatically segment 1200 clinical CT images as a demonstration of the utility of the method. Each case was made into a patient-specific phantom based on the segmentation masks, with unsegmented organs and structures filled in by deforming a template XCAT phantom of similar anatomy. The organ doses were then estimated using a validated scanner-specific MC-GPU package with the actual scan information. The segmented organ information was likewise used to assess contrast, noise, and detectability index within each organ. The neural network segmentation model showed Dice similarity coefficients (DSC) above 0.85 for the majority of organs; notably, the lungs and liver showed DSCs of 0.95 and 0.94, respectively. The segmentation results produced patient-specific dose and quality values across the 1200 tested patients, with representative histogram distributions. The measurements were compared in a global-to-organ (e.g., CTDIvol vs. liver dose) and an organ-to-organ (e.g., liver dose vs. spleen dose) manner. The global-to-organ measurements (liver dose vs. CTDIvol: ρ = 0.62; liver vs. global d': ρ = 0.78; liver vs. global noise: ρ = 0.55) were less correlated than the organ-to-organ measurements (liver vs. spleen dose: ρ = 0.93; liver vs. spleen d': ρ = 0.82; liver vs. spleen noise: ρ = 0.78).
This variation of measurement is more prominent for the iterative reconstruction kernel than for the filtered back projection kernel (liver vs. global noise: ρ_IR = 0.47 vs. ρ_FBP = 0.75; liver vs. global d': ρ_IR = 0.74 vs. ρ_FBP = 0.83). The results can help derive meaningful relationships between image quality, organ doses, and patient attributes.
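The Dice similarity coefficient reported for each organ measures voxel-wise overlap between the predicted and manually labeled masks. A minimal sketch of the metric (generic formula, not the authors' evaluation code; the toy masks are illustrative only):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) over binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy example: two 8-voxel "organ" masks sharing 4 voxels
a = np.zeros((4, 4, 4), dtype=bool)
b = np.zeros((4, 4, 4), dtype=bool)
a[1:3, 1:3, 1:3] = True
b[1:3, 1:3, 2:4] = True
print(dice_coefficient(a, b))  # 2*4 / (8 + 8) = 0.5
```

A DSC of 0.95 for the lungs thus means 95% overlap-weighted agreement between the network's mask and the manual label.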
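The global-to-organ versus organ-to-organ comparison amounts to correlating per-patient metric vectors. A minimal sketch with synthetic data (the coupling strengths and variable names are assumptions for illustration, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1200  # cohort size matches the study; the values below are synthetic

# Hypothetical per-patient metrics (arbitrary units)
liver_dose = rng.normal(10.0, 2.0, n)
spleen_dose = liver_dose + rng.normal(0.0, 0.7, n)       # tightly coupled organ pair
ctdi_vol = 0.5 * liver_dose + rng.normal(0.0, 2.0, n)    # loosely coupled global metric

# Pearson correlation between metric pairs
r_organ = np.corrcoef(liver_dose, spleen_dose)[0, 1]
r_global = np.corrcoef(liver_dose, ctdi_vol)[0, 1]
print(f"organ-to-organ r = {r_organ:.2f}, global-to-organ r = {r_global:.2f}")
```

Under these assumed couplings, the organ-to-organ correlation exceeds the global-to-organ one, mirroring the pattern the abstract reports (e.g., liver vs. spleen dose more correlated than liver dose vs. CTDIvol).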

Original language: English
Title of host publication: Medical Imaging 2019
Subtitle of host publication: Physics of Medical Imaging
Editors: Taly Gilat Schmidt, Guang-Hong Chen, Hilde Bosmans
Publisher: SPIE
ISBN (Electronic): 9781510625433
DOIs
State: Published - 2019
Externally published: Yes
Event: Medical Imaging 2019: Physics of Medical Imaging - San Diego, United States
Duration: Feb 17 2019 - Feb 20 2019

Publication series

Name: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume: 10948
ISSN (Print): 1605-7422

Conference

Conference: Medical Imaging 2019: Physics of Medical Imaging
Country/Territory: United States
City: San Diego
Period: 02/17/19 - 02/20/19

Funding

The work was supported in part by the National Institutes of Health (Grant No. R01EB001838). The authors gratefully thank Ms. Jocelyn Hoye and Mr. Brian Harrawood for their valuable discussions and help.

Funders: National Institutes of Health

Keywords

• Computational phantom
• Convolutional neural networks
• Detectability index
• Image quality
• Monte Carlo
• Organ dose
• Organ segmentation
• Patient-specific
