Segmentation of organs of interest in 3D medical images is essential for accurate diagnosis and longitudinal studies. Although recent advances in deep learning have brought success to many segmentation tasks, high performance requires large datasets, and the annotation process is both time-consuming and labor-intensive. In this paper, we propose a 3D few-shot segmentation framework for accurate organ segmentation using a limited number of annotated training samples of the target organ. To this end, we design a U-Net-like network that predicts segmentation by learning the relationship between 2D slices of the support data and a query image, and includes a bidirectional gated recurrent unit (GRU) that learns the consistency of encoded features across adjacent slices. We also introduce a transfer-learning method that adapts the model to the characteristics of the target image and organ by updating it before testing, using arbitrary support and query data sampled from the support set. We evaluate the proposed model on three 3D CT datasets with annotations of different organs. Our model yields significantly improved performance over state-of-the-art few-shot segmentation models and is comparable to a fully supervised model trained with more target training data.
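To make the slice-consistency idea concrete, the following is a minimal sketch (not the authors' code) of how a bidirectional GRU can refine per-slice encoder features of a 3D volume so that predictions stay consistent across adjacent slices. The feature dimensions, pooling to one vector per slice, and the projection layer are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class SliceConsistencyGRU(nn.Module):
    """Illustrative sketch: a bidirectional GRU that propagates information
    between encoder features of adjacent 2D slices of a 3D volume."""

    def __init__(self, feat_dim: int = 256, hidden_dim: int = 256):
        super().__init__()
        # Treat the slice axis as the sequence axis of the GRU.
        self.gru = nn.GRU(feat_dim, hidden_dim,
                          batch_first=True, bidirectional=True)
        # Project concatenated forward/backward states back to feat_dim.
        self.proj = nn.Linear(2 * hidden_dim, feat_dim)

    def forward(self, slice_feats: torch.Tensor) -> torch.Tensor:
        # slice_feats: (batch, num_slices, feat_dim), one pooled feature
        # vector per 2D slice.
        out, _ = self.gru(slice_feats)
        return self.proj(out)  # refined per-slice features, same shape


# Usage: refine features for a volume with 64 slices.
feats = torch.randn(1, 64, 256)
refined = SliceConsistencyGRU()(feats)
print(refined.shape)  # torch.Size([1, 64, 256])
```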
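The transfer-learning step can likewise be sketched as episodic test-time fine-tuning: before testing, the model is updated on episodes whose support and query pairs are both drawn from the annotated support data, adapting it to the target image and organ. The model signature, loss, and hyperparameters below are hypothetical assumptions, not the paper's specification.

```python
import random
import torch
import torch.nn as nn

def adapt_before_testing(model: nn.Module,
                         support_slices: list,  # list of (image, mask) 2D slice tensors
                         steps: int = 20,
                         lr: float = 1e-4) -> nn.Module:
    """Hedged sketch of pre-test adaptation: fine-tune on pseudo support/query
    episodes sampled from the support set only."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(steps):
        # Draw two distinct annotated slices: one plays the support role,
        # the other the query role.
        (s_img, s_mask), (q_img, q_mask) = random.sample(support_slices, 2)
        # Hypothetical forward signature: query plus support image and mask.
        pred = model(q_img.unsqueeze(0), s_img.unsqueeze(0), s_mask.unsqueeze(0))
        loss = loss_fn(pred, q_mask.unsqueeze(0))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```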