Increasing data set sizes of 3D microscopy imaging experiments demand automated segmentation processes to extract meaningful biomedical information. Due to the shortage of annotated 3D image data available for machine learning-based approaches, 3D segmentation methods must be robust and generalize well to unseen data. The Cellpose approach proposed by Stringer \textit{et al.} \cite{stringer2020} proved to be such a generalist approach for cell instance segmentation tasks. In this paper, we extend the Cellpose approach to improve segmentation accuracy on 3D image data, and we further show how the formulation of the gradient maps can be simplified while remaining robust and reaching similar segmentation accuracy. The code is publicly available and has been integrated into two established open-source applications that allow the 3D extension of Cellpose to be used without any programming knowledge.