MICROSCOPY IMAGE REGISTRATION, SYNTHESIS AND SEGMENTATION
Chichen Fu
DOI: 10.25394/PGS.7754981.v1
https://hammer.purdue.edu/articles/thesis/MICROSCOPY_IMAGE_REGISTRATION_SYNTHESIS_AND_SEGMENTATION/7754981
2019-06-10 16:33:18
Keywords: microscopy imaging analyses; Deep Learning-Based Segmentation; Microscopy image synthesis; microscopy image segmentation; Deep Learning-Based Biomedical Image Synthesis; Electrical and Electronic Engineering not elsewhere classified; Computer Engineering

Fluorescence microscopy has emerged as a powerful tool for studying cell biology because it enables the acquisition of 3D image volumes deeper into tissue and the imaging of complex subcellular structures. Fluorescence microscopy images are frequently distorted by motion from animal respiration and heartbeat, which complicates the quantitative analysis needed to characterize the structure and constituency of tissue volumes. This thesis describes a two-pronged approach to quantitative analysis consisting of non-rigid registration and deep convolutional neural network segmentation. The proposed image registration method corrects motion artifacts in three-dimensional fluorescence microscopy images collected over time. In particular, our method uses 3D B-spline based non-rigid registration with a coarse-to-fine strategy to register stacks of images collected at different time intervals, and 4D rigid registration to register 3D volumes over time. The results show that the proposed method can correct global motion artifacts of sample tissues in four-dimensional space, thereby revealing the motility of individual cells in the tissue.

We also describe in this thesis nuclei segmentation methods using deep convolutional neural networks, data augmentation to generate training images of different shapes and contrasts, a refinement process that combines segmentation results from the horizontal, frontal, and sagittal planes of a volume, and a watershed technique to enumerate the nuclei. Our results indicate that, compared to 3D ground truth data, our method can successfully segment and count 3D nuclei. Furthermore, a microscopy image synthesis method based on spatially constrained cycle-consistent adversarial networks is used to efficiently generate training data. A modified 3D U-Net is trained with a combination of Dice loss and binary cross-entropy to achieve accurate nuclei segmentation, and a multi-task U-Net is used to resolve overlapping nuclei. This method achieves high accuracy in both object-based and voxel-based evaluations.
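
The abstract does not reproduce the registration code itself. As an illustration only, the following is a minimal sketch of coarse-to-fine 3D B-spline non-rigid registration between two volumes, written with SimpleITK; the toolkit, the parameter values, and the file names "fixed.nii" and "moving.nii" are assumptions for the sketch, not the thesis implementation.

```python
import SimpleITK as sitk

# Read the reference (fixed) and motion-distorted (moving) 3D volumes.
# "fixed.nii" and "moving.nii" are placeholder file names.
fixed = sitk.ReadImage("fixed.nii", sitk.sitkFloat32)
moving = sitk.ReadImage("moving.nii", sitk.sitkFloat32)

# Coarse control-point grid for the B-spline free-form deformation.
mesh_size = [4] * fixed.GetDimension()
initial_tx = sitk.BSplineTransformInitializer(fixed, mesh_size)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
reg.SetInterpolator(sitk.sitkLinear)

# Coarse-to-fine strategy: the B-spline control grid is refined across
# three resolution levels while the image pyramid is progressively
# downsampled less and smoothed less.
reg.SetInitialTransformAsBSpline(initial_tx, inPlace=True, scaleFactors=[1, 2, 4])
reg.SetShrinkFactorsPerLevel([4, 2, 1])
reg.SetSmoothingSigmasPerLevel([2, 1, 0])

final_tx = reg.Execute(fixed, moving)

# Warp the moving volume into the fixed volume's coordinate space.
registered = sitk.Resample(moving, fixed, final_tx, sitk.sitkLinear, 0.0)
sitk.WriteImage(registered, "registered.nii")
```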
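
The exact training loss is only named in the abstract (Dice loss combined with binary cross-entropy). A minimal PyTorch sketch of such a combined loss is given below; the equal weighting and the smoothing constant are assumptions, not values taken from the thesis.

```python
import torch
import torch.nn as nn

class DiceBCELoss(nn.Module):
    """Weighted sum of soft-Dice loss and binary cross-entropy for
    voxel-wise binary segmentation (targets are float 0/1 volumes)."""

    def __init__(self, bce_weight=0.5, smooth=1.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.bce_weight = bce_weight
        self.smooth = smooth

    def forward(self, logits, targets):
        # Soft-Dice term computed per sample on flattened volumes.
        probs = torch.sigmoid(logits)
        probs_flat = probs.reshape(probs.size(0), -1)
        targets_flat = targets.reshape(targets.size(0), -1)
        intersection = (probs_flat * targets_flat).sum(dim=1)
        dice = (2.0 * intersection + self.smooth) / (
            probs_flat.sum(dim=1) + targets_flat.sum(dim=1) + self.smooth)
        dice_loss = 1.0 - dice.mean()

        # Binary cross-entropy term on the raw logits.
        bce_loss = self.bce(logits, targets)
        return self.bce_weight * bce_loss + (1.0 - self.bce_weight) * dice_loss
```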
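
Similarly, the watershed step is only described at a high level. The sketch below shows one common way to split touching nuclei and count them from a binary segmentation volume using scipy and scikit-image; the libraries and the min_distance parameter are assumptions for illustration, not the thesis code.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def count_nuclei(binary_volume, min_distance=5):
    """Label individual nuclei in a 3D binary mask with a marker-based
    watershed on the distance transform and return (labels, count)."""
    # Distance to background peaks near nucleus centers.
    distance = ndi.distance_transform_edt(binary_volume)

    # Local maxima of the distance map act as one marker per nucleus.
    peak_coords = peak_local_max(distance, min_distance=min_distance,
                                 labels=binary_volume)
    markers = np.zeros(binary_volume.shape, dtype=np.int32)
    markers[tuple(peak_coords.T)] = np.arange(1, len(peak_coords) + 1)

    # Watershed on the inverted distance map, restricted to the mask,
    # separates touching nuclei into distinct labels.
    labels = watershed(-distance, markers, mask=binary_volume)
    return labels, int(labels.max())
```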