Contemporary approaches to instance segmentation in cell science use 2D or 3D convolutional networks depending on the experiment and data structures. However, limitations in microscopy systems or efforts to prevent phototoxicity commonly force data to be recorded in sub-optimally sampled regimes, which greatly reduces the utility of such 3D data, especially in crowded environments with significant axial overlap between objects. In such regimes, 2D segmentations are both more reliable for quantifying cell morphology and easier to annotate. In this work, we propose the Projection Enhancement Network (PEN), a novel convolutional module that processes sub-sampled 3D data into a 2D RGB semantic compression and is trained jointly with an instance segmentation network of choice to produce 2D segmentations. To train PEN, we augment a low-density cell image dataset to increase cell density, and we evaluate PEN on curated datasets. We show that with PEN, the learned semantic representation in CellPose encodes depth and greatly improves segmentation performance compared to maximum intensity projection images as input, but does not similarly aid segmentation in region-based networks such as Mask-RCNN. Finally, we dissect how the segmentation strength of PEN with CellPose varies with cell density, using disseminated cells from side-by-side spheroids. We present PEN as a data-driven solution for forming compressed representations of 3D data that improve the 2D segmentations produced by instance segmentation networks.
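To make the core idea concrete, the following is a minimal sketch, assuming PyTorch, of how a learned projection module could compress a sub-sampled z-stack into a three-channel 2D image suitable as input to a 2D instance segmentation network, alongside a maximum intensity projection baseline. The LearnedProjection class, its layer sizes, and the depth reduction used here are illustrative assumptions and do not reproduce PEN's actual architecture.

```python
# Illustrative sketch only: a toy learned z-projection versus a max-intensity projection.
import torch
import torch.nn as nn


def max_intensity_projection(stack: torch.Tensor) -> torch.Tensor:
    """Baseline: collapse a z-stack of shape (B, 1, Z, H, W) to a 2D image (B, 1, H, W)."""
    return stack.max(dim=2).values


class LearnedProjection(nn.Module):
    """Hypothetical projection module: 3D convolutions over the z-stack followed by a
    reduction over depth, emitting a 3-channel (RGB-like) 2D image that a 2D instance
    segmentation network could consume and that can be trained jointly with it.
    Layer choices are placeholders, not the published PEN design."""

    def __init__(self, features: int = 8):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(1, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, 3, kernel_size=3, padding=1),
        )

    def forward(self, stack: torch.Tensor) -> torch.Tensor:
        # stack: (B, 1, Z, H, W) sub-sampled z-stack
        x = self.encode(stack)       # (B, 3, Z, H, W)
        x = x.max(dim=2).values      # collapse the depth axis -> (B, 3, H, W)
        return torch.sigmoid(x)      # bounded 3-channel "semantic compression"


if __name__ == "__main__":
    z_stack = torch.rand(2, 1, 15, 256, 256)          # batch of sparsely sampled z-stacks
    print(max_intensity_projection(z_stack).shape)    # torch.Size([2, 1, 256, 256])
    print(LearnedProjection()(z_stack).shape)         # torch.Size([2, 3, 256, 256])
```

In a joint training setup of this kind, the projection module's output would be passed directly to the downstream segmentation network, so that gradients from the 2D segmentation loss shape the compression; the baseline projection, by contrast, discards depth information before the network ever sees it.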