Despite the success of convolutional neural networks for 3D medical-image segmentation, current architectures are still not sufficiently robust to the protocols of different scanners and the variety of image properties they produce. Moreover, large-scale datasets with annotated regions of interest are scarce, making good results difficult to obtain. To overcome these challenges, we introduce IB-U-Nets, a novel architecture with an inductive bias inspired by visual processing in vertebrates. Taking the 3D U-Net as the base, we add two 3D residual components to the second encoder blocks. They provide an inductive bias that helps U-Nets segment anatomical structures from 3D images with increased robustness and accuracy. We compared IB-U-Nets with state-of-the-art 3D U-Nets on multiple modalities and organs, such as the prostate and spleen, using the same training and testing pipeline, including data processing, augmentation, and cross-validation. Our results demonstrate the superior robustness and accuracy of IB-U-Nets, especially on small datasets, as is typical in medical-image analysis. The IB-U-Nets source code and models are publicly available.
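The abstract does not specify the exact kernels or filter design of the two 3D residual components, so the following is only a minimal sketch of the general idea: a shape-preserving 3D filtering operation whose output is added back to its input, so it can be inserted into an encoder stage without changing tensor shapes. The function names (`conv3d_same`, `residual_3d_block`) and the smoothing kernel are hypothetical illustrations, not the paper's actual components.

```python
import numpy as np

def conv3d_same(volume, kernel):
    """Naive 'same'-padded 3D convolution (illustration only, not optimized)."""
    kd, kh, kw = kernel.shape
    pad = [(kd // 2, kd // 2), (kh // 2, kh // 2), (kw // 2, kw // 2)]
    padded = np.pad(volume, pad, mode="constant")
    out = np.zeros_like(volume, dtype=float)
    D, H, W = volume.shape
    for z in range(D):
        for y in range(H):
            for x in range(W):
                out[z, y, x] = np.sum(padded[z:z + kd, y:y + kh, x:x + kw] * kernel)
    return out

def residual_3d_block(volume, kernel):
    """Filter the volume and add the input back (residual connection).

    Because the spatial shape is preserved, a block like this can be dropped
    into an encoder stage of a 3D U-Net without altering the rest of the
    architecture -- the core property the IB components rely on.
    """
    return volume + conv3d_same(volume, kernel)

# Toy example: an 8x8x8 volume with a hypothetical 3x3x3 averaging kernel.
vol = np.random.rand(8, 8, 8)
kernel = np.ones((3, 3, 3)) / 27.0
out = residual_3d_block(vol, kernel)
print(out.shape)  # (8, 8, 8)
```

The residual (additive) form means that with an all-zero kernel the block reduces to the identity, which is what makes such components easy to graft onto a pretrained or standard U-Net encoder.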