There exist a large number of datasets for organ segmentation that are partially annotated and sequentially constructed: a typical dataset is built at a certain time by curating medical images and annotating the organs of interest, so new datasets with annotations of new organ categories accumulate over time. To unleash the potential of these partially labeled, sequentially constructed datasets, we propose to learn a multi-organ segmentation model through incremental learning (IL). In each IL stage, we lose access to the previous annotations, whose knowledge is assumed to be captured by the current model, and gain access to a new dataset annotated with new organ categories, from which we learn to update the segmentation model to include the new organs. We make the first attempt to conjecture that the differing data distributions across stages are the key reason for the 'catastrophic forgetting' that commonly afflicts IL methods, and we verify that IL adapts naturally to medical imaging scenarios. Extensive experiments on five open-source datasets demonstrate the effectiveness of our method and support the above conjecture.
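The IL-stage setup above can be made concrete with a minimal sketch. The helper below, with all names and the pseudo-labeling strategy being illustrative assumptions rather than the authors' exact method, builds per-voxel training targets for one stage: voxels annotated with a new organ keep their (shifted) label, and all other voxels fall back on the previous model's predictions, which are assumed to have captured the knowledge of the now-inaccessible old annotations.

```python
import numpy as np

def il_stage_targets(new_labels, old_model_pred, num_old_classes):
    """Hypothetical sketch of target construction for one IL stage.

    new_labels: int array of annotations for the NEW organ categories only
        (0 = unlabeled, since the current dataset is partially labeled).
    old_model_pred: int array of the previous model's predictions over the
        OLD organ categories (0 = background).
    num_old_classes: number of organ categories learned in earlier stages;
        new labels are shifted past this range so class ids never collide.
    """
    # Shift new-organ labels into the global label space.
    shifted_new = np.where(new_labels > 0, new_labels + num_old_classes, 0)
    # Where no new annotation exists, reuse the old model's prediction.
    return np.where(shifted_new > 0, shifted_new, old_model_pred)
```

For example, with `num_old_classes = 3`, a voxel newly annotated as organ 1 becomes class 4, while unannotated voxels inherit whatever old organ (or background) the previous model predicts there.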