Automatic 3D neuron reconstruction is critical for analysing the morphology and functionality of neurons in brain circuit activities. However, the performance of existing tracing algorithms is hindered by low image quality. Recently, a series of deep learning based segmentation methods have been proposed to improve the quality of raw 3D optical image stacks by removing noise and restoring neuronal structures from low-contrast backgrounds. Due to the variety of neuron morphology and the lack of large neuron datasets, most current neuron segmentation models rely on introducing complex and specially designed submodules into a base architecture, with the aim of encoding better feature representations. Though successful, these additions impose an extra computational burden during inference. Therefore, rather than modifying the base network, we shift our focus to the dataset itself. The encoder-decoder backbone used in most neuron segmentation models attends only to intra-volume voxel points to learn structural features of neurons, but neglects the intrinsic semantic features shared by voxels of the same category across different volumes, which are also important for expressive representation learning. Hence, to better utilise the scarce dataset, we propose to explicitly exploit such intrinsic voxel features through a novel voxel-level cross-volume representation learning paradigm built on top of an encoder-decoder segmentation model. Our method introduces no extra cost during inference. Evaluated on 42 3D neuron images from the BigNeuron project, our proposed method is demonstrated to improve the learning ability of the original segmentation model and to further enhance the reconstruction performance.
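To make the idea of voxel-level cross-volume representation learning concrete, the sketch below shows one plausible instantiation as an auxiliary training loss: voxel embeddings are sampled from every volume in a mini-batch and pulled together when they share the same semantic label, even if they come from different volumes. This is a minimal sketch under stated assumptions, not the paper's actual formulation; the supervised contrastive form, the function name `cross_volume_contrastive_loss`, and all hyperparameters are hypothetical choices for illustration.

```python
# Minimal sketch of a voxel-level, cross-volume auxiliary loss (assumed supervised
# contrastive form; names and hyperparameters are hypothetical, not from the paper).
import torch
import torch.nn.functional as F

def cross_volume_contrastive_loss(features, labels, samples_per_volume=256, temperature=0.1):
    """features: (B, C, D, H, W) decoder embeddings for B volumes in a mini-batch.
    labels:   (B, D, H, W) voxel labels (0 = background, 1 = neuron).
    Voxels are sampled from all volumes in the batch, so positive pairs may come
    from different volumes -- the cross-volume aspect of the loss."""
    B, C = features.shape[:2]
    feats, labs = [], []
    for b in range(B):
        f = features[b].reshape(C, -1).t()                 # (N_voxels, C)
        y = labels[b].reshape(-1)                          # (N_voxels,)
        idx = torch.randperm(y.numel())[:samples_per_volume]
        feats.append(F.normalize(f[idx], dim=1))           # unit-norm embeddings
        labs.append(y[idx])
    feats = torch.cat(feats, dim=0)                        # (B*S, C)
    labs = torch.cat(labs, dim=0)                          # (B*S,)

    sim = feats @ feats.t() / temperature                  # pairwise similarities
    pos_mask = (labs[:, None] == labs[None, :]).float()    # same-category pairs
    pos_mask.fill_diagonal_(0)                             # exclude self-pairs
    logits_mask = 1.0 - torch.eye(len(labs), device=feats.device)

    # log-probability of each positive pair among all non-self pairs
    log_prob = sim - torch.log((torch.exp(sim) * logits_mask).sum(1, keepdim=True) + 1e-8)
    mean_log_prob_pos = (pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -mean_log_prob_pos.mean()
```

In such a setup the total training objective would be the ordinary segmentation loss plus a weighted auxiliary term, e.g. `loss = seg_loss + lambda_cv * cross_volume_contrastive_loss(emb, gt)`; because the auxiliary branch is used only during training, it is consistent with the abstract's claim of no extra cost at inference time.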