TY - GEN
T1 - A two-tier convolutional neural network for combined detection and segmentation in biological imagery
AU - Ziabari, Amirkoushyar
AU - Shirinifard, Abbas
AU - Eicholtz, Matthew R.
AU - Solecki, David J.
AU - Rose, Derek C.
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - Deep learning techniques have been useful in modern microscopy imaging for the study and analysis of biological structures and organs. Convolutional neural networks (CNNs) have improved 2D object detection, localization, and segmentation. For imagery containing biological structures with depth, it is especially desirable to perform these tasks in 3D. Traditionally, performing these tasks simultaneously in 3D has proven to be computationally expensive. Currently available methodologies thus largely work to segment 3D objects from 2D images (without context from captured 3D volumes). In this work, we present a novel approach to perform fast and accurate localization, detection, and segmentation of volumes containing cells. Specifically, in our method, we modify and tune two state-of-the-art CNNs, namely 2D YOLOv2 and 3D U-Net, and combine them with new fusion and image processing algorithms. Annotated volumes in this space are limited, and we have created synthetic data that mimics actual structures for training and testing our proposed approach. Promising results on this test data demonstrate the value of the technique and offer a methodology for 3D cell analysis in real microscopy imagery.
AB - Deep learning techniques have been useful in modern microscopy imaging for the study and analysis of biological structures and organs. Convolutional neural networks (CNNs) have improved 2D object detection, localization, and segmentation. For imagery containing biological structures with depth, it is especially desirable to perform these tasks in 3D. Traditionally, performing these tasks simultaneously in 3D has proven to be computationally expensive. Currently available methodologies thus largely work to segment 3D objects from 2D images (without context from captured 3D volumes). In this work, we present a novel approach to perform fast and accurate localization, detection, and segmentation of volumes containing cells. Specifically, in our method, we modify and tune two state-of-the-art CNNs, namely 2D YOLOv2 and 3D U-Net, and combine them with new fusion and image processing algorithms. Annotated volumes in this space are limited, and we have created synthetic data that mimics actual structures for training and testing our proposed approach. Promising results on this test data demonstrate the value of the technique and offer a methodology for 3D cell analysis in real microscopy imagery.
KW - 3D U-Net
KW - Cells
KW - Detection
KW - Instance Segmentation
KW - Localization
KW - YOLO
UR - http://www.scopus.com/inward/record.url?scp=85079275262&partnerID=8YFLogxK
U2 - 10.1109/GlobalSIP45357.2019.8969303
DO - 10.1109/GlobalSIP45357.2019.8969303
M3 - Conference contribution
AN - SCOPUS:85079275262
T3 - GlobalSIP 2019 - 7th IEEE Global Conference on Signal and Information Processing, Proceedings
BT - GlobalSIP 2019 - 7th IEEE Global Conference on Signal and Information Processing, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 7th IEEE Global Conference on Signal and Information Processing, GlobalSIP 2019
Y2 - 11 November 2019 through 14 November 2019
ER -