Many datasets have recently been created for medical image segmentation tasks, and it is natural to ask whether we can use them to sequentially train a single model that (1) performs better on all of these datasets and (2) generalizes well and transfers better to unknown target site domains. Prior works have pursued this goal by jointly training one model on multi-site datasets, which achieves competitive performance on average; however, such methods rely on the assumption that all training data are available, limiting their effectiveness in practical deployment. In this paper, we propose a novel multi-site segmentation framework called incremental-transfer learning (ITL), which learns a model from multi-site datasets in an end-to-end sequential fashion. Specifically, "incremental" refers to training on sequentially constructed datasets, and "transfer" is achieved by leveraging useful information from a linear combination of the embedding features of each dataset. First, we introduce the ITL framework, in which we train a network consisting of a site-agnostic encoder with pretrained weights and at most two segmentation decoder heads, and we design a novel site-level incremental loss to generalize well to the target domain. Second, we show for the first time that our ITL training scheme alleviates the challenging catastrophic forgetting problem in incremental learning. We conduct experiments on five challenging benchmark datasets to validate the effectiveness of our incremental-transfer learning approach. Our approach makes minimal assumptions on computational resources and domain-specific expertise, and hence constitutes a strong starting point for multi-site medical image segmentation.
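To make the architecture described above concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes hypothetical names (ITLNet, SegDecoder, alpha, prev_feats) and illustrates only the general idea of a site-agnostic encoder with pretrained weights, at most two segmentation decoder heads, and "transfer" via a linear combination of embedding features across sites.

```python
# Illustrative sketch (assumption, not the authors' code) of the ITL idea:
# a shared site-agnostic encoder, at most two segmentation decoder heads,
# and "transfer" via a learned linear combination of embedding features.
from typing import Optional

import torch
import torch.nn as nn


class SegDecoder(nn.Module):
    """Toy decoder head mapping encoder features to per-pixel class logits."""

    def __init__(self, in_ch: int, num_classes: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_ch, in_ch // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch // 2, num_classes, kernel_size=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats)


class ITLNet(nn.Module):
    """Site-agnostic encoder plus previous/current decoder heads (at most two)."""

    def __init__(self, encoder: nn.Module, feat_ch: int, num_classes: int):
        super().__init__()
        self.encoder = encoder  # in practice initialized from pretrained weights
        self.prev_decoder = SegDecoder(feat_ch, num_classes)  # head kept from site t-1
        self.curr_decoder = SegDecoder(feat_ch, num_classes)  # head trained on site t
        # Hypothetical scalar coefficient for mixing previous and current embeddings.
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x: torch.Tensor, prev_feats: Optional[torch.Tensor] = None):
        feats = self.encoder(x)
        if prev_feats is not None:
            # "Transfer": linear combination of embedding features across sites.
            feats = self.alpha * feats + (1.0 - self.alpha) * prev_feats
        return self.curr_decoder(feats), feats


if __name__ == "__main__":
    # Stand-in encoder; a real setup would use a pretrained segmentation backbone.
    encoder = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True))
    model = ITLNet(encoder, feat_ch=32, num_classes=2)
    x = torch.randn(2, 1, 64, 64)                      # a batch from the current site
    logits, feats = model(x)                           # first site: no previous features
    logits_t, _ = model(x, prev_feats=feats.detach())  # later site: reuse embeddings
    print(logits.shape, logits_t.shape)                # both torch.Size([2, 2, 64, 64])
```

In this sketch the mixing coefficient is a single learnable scalar and the previous decoder head is merely kept alongside the current one; the actual linear-combination scheme and the site-level incremental loss follow the definitions in the paper.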