Existing image segmentation networks mainly leverage large-scale labeled datasets to attain high accuracy. However, labeling medical images is very expensive, since it requires sophisticated expert knowledge. It is therefore desirable to achieve high segmentation performance with only a few labeled samples. In this paper, we develop a data augmentation method for one-shot brain magnetic resonance imaging (MRI) segmentation which exploits only one labeled MRI image (termed the atlas) and a few unlabeled images. In particular, we propose to learn the probability distributions of deformations (covering both shape and intensity) of the unlabeled MRI images with respect to the atlas via 3D variational autoencoders (VAEs). In this manner, our method can sample from the learned deformation distributions to generate new, authentic-looking brain MRI images, and the number of generated samples is sufficient to train a deep segmentation network. Furthermore, we introduce a new segmentation benchmark that evaluates the generalization performance of a segmentation network under a cross-dataset setting, where training and test data are collected from different sources. Extensive experiments demonstrate that our method outperforms state-of-the-art one-shot medical segmentation methods. Our code has been released at https://github.com/dyh127/Modeling-the-Probabilistic-Distribution-of-Unlabeled-Data.
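To make the core idea concrete, the sketch below shows one possible form of a 3D VAE that models the distribution of deformation fields between unlabeled scans and the atlas; new fields sampled from the latent space can then warp the atlas (and its label map) into synthetic training pairs. This is a minimal illustration, not the released implementation: the layer sizes, class names, and loss weighting are assumptions, and readers should consult the repository above for the actual architecture.

```python
# Minimal sketch (assumed, not the authors' released code) of a 3D VAE over
# deformation fields. Input: a 3-channel displacement volume (dx, dy, dz).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformationVAE3D(nn.Module):
    """Encodes a 3D deformation field into a latent Gaussian and decodes a
    new field; sampling the latent yields novel deformations of the atlas."""

    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.fc_mu = nn.Linear(32, latent_dim)
        self.fc_logvar = nn.Linear(32, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 32 * 4 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 3, 4, stride=2, padding=1),
        )

    def forward(self, field):
        h = self.encoder(field)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z ~ N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        out = self.fc_dec(z).view(-1, 32, 4, 4, 4)
        recon = self.decoder(out)
        # Resize the decoded field back to the input resolution.
        recon = F.interpolate(recon, size=field.shape[2:],
                              mode="trilinear", align_corners=False)
        return recon, mu, logvar

def vae_loss(recon, target, mu, logvar, kl_weight=0.01):
    # Reconstruction term plus KL divergence to a standard normal prior;
    # the weighting is an illustrative choice, not the paper's setting.
    rec = F.mse_loss(recon, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl_weight * kl
```

In this sketch an analogous VAE would be trained for intensity deformations; at augmentation time, latent codes drawn from the prior are decoded into deformation fields that are applied to the atlas to produce labeled synthetic scans for training the segmentation network.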