Volumetric cell segmentation in fluorescence microscopy images is important for studying a wide variety of cellular processes. Applications range from the analysis of cancer cells to behavioral studies of cells in the embryonic stage. As in other computer vision fields, most recent methods use either large convolutional neural networks (CNNs) or vision transformer models (ViTs). Since the number of available 3D microscopy images is typically limited in applications, we take a different approach and introduce a small CNN for volumetric cell segmentation. Compared to previous CNN models for cell segmentation, our model is efficient and has an asymmetric encoder-decoder structure with very few parameters in the decoder. Training efficiency is further improved via transfer learning. In addition, we introduce Context Aware Pseudocoloring to exploit spatial context in the z-direction of 3D images while performing volumetric cell segmentation slice-wise. We evaluated our method on different 3D datasets from the Cell Segmentation Benchmark of the Cell Tracking Challenge. Our segmentation method achieves top-ranking results, while our CNN model has up to 25x fewer parameters than other top-ranking methods. Code and pretrained models are available at: https://github.com/roydenwa/efficient-cell-seg
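To make the slice-wise idea concrete, below is a minimal sketch, not the paper's implementation, of one way neighboring z-slices can be folded into the color channels of each slice so that a 2D model still sees spatial context in the z-direction. The function name `pseudocolor_slices` and the simple neighbor-stacking scheme are assumptions for illustration only; the actual Context Aware Pseudocoloring described in the paper and repository may combine slices differently.

```python
import numpy as np


def pseudocolor_slices(volume: np.ndarray) -> np.ndarray:
    """Encode z-context into color channels for slice-wise 2D segmentation.

    volume: single-channel fluorescence stack of shape (Z, H, W).
    Returns an array of shape (Z, H, W, 3), where each slice carries its
    previous and next z-neighbors as additional channels.
    (Illustrative sketch only; not the paper's exact scheme.)
    """
    # Pad along z so border slices have (replicated) neighbors.
    padded = np.pad(volume, ((1, 1), (0, 0), (0, 0)), mode="edge")
    prev_slices = padded[:-2]   # slice z - 1
    curr_slices = padded[1:-1]  # slice z
    next_slices = padded[2:]    # slice z + 1
    return np.stack([prev_slices, curr_slices, next_slices], axis=-1)


if __name__ == "__main__":
    # Toy volume: 8 slices of 64x64 pixels.
    vol = np.random.rand(8, 64, 64).astype(np.float32)
    pseudo_rgb = pseudocolor_slices(vol)
    print(pseudo_rgb.shape)  # (8, 64, 64, 3)
```

Each pseudocolored slice can then be fed to a 2D segmentation CNN, and the per-slice predictions restacked into a 3D segmentation volume.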