Most existing approaches that train a unified multi-organ segmentation model from several single-organ datasets require simultaneous access to all of these datasets during training. In real-world scenarios, due to privacy and ethics concerns, the training data for the organs of interest may not be publicly available. To this end, we investigate a data-free incremental organ segmentation scenario and propose a novel incremental training framework to solve it. For privacy protection, we use the pretrained model instead of its original training data. Specifically, given a pretrained $K$-organ segmentation model and a new single-organ dataset, we train a unified $(K+1)$-organ segmentation model without accessing any data from the previous training stages. Our approach consists of two parts: a background label alignment strategy and an uncertainty-aware guidance strategy. The first transfers knowledge from the pretrained model to the model being trained. The second extracts uncertainty information from the pretrained model to guide the whole knowledge transfer process. By combining these two strategies, more reliable information is extracted from the pretrained model without its original training data. Experiments on multiple publicly available pretrained models and the multi-organ dataset MOBA demonstrate the effectiveness of our framework.
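To make the setting concrete, the following is a minimal, illustrative sketch of an uncertainty-weighted knowledge-transfer loss for the data-free incremental case. It is not the paper's actual implementation: the entropy-based uncertainty estimate, the masking rule used for background label alignment, and all function and tensor names here are assumptions introduced only for illustration.

```python
import torch
import torch.nn.functional as F

def incremental_loss(student_logits, teacher_logits, new_organ_mask, eps=1e-6):
    """Hypothetical sketch of uncertainty-aware knowledge transfer for
    data-free incremental organ segmentation (details assumed, not from the paper).

    student_logits: (B, K+2, H, W) -- background + K old organs + 1 new organ
    teacher_logits: (B, K+1, H, W) -- background + K old organs (pretrained model)
    new_organ_mask: (B, H, W) bool -- ground-truth mask of the new organ
    """
    teacher_prob = F.softmax(teacher_logits, dim=1)

    # Uncertainty of the pretrained model, estimated here via prediction entropy;
    # confident (low-entropy) pixels contribute more to the transfer loss.
    entropy = -(teacher_prob * torch.log(teacher_prob + eps)).sum(dim=1)
    confidence = 1.0 - entropy / torch.log(torch.tensor(float(teacher_prob.shape[1])))

    # Background label alignment (assumed form): the teacher's background channel
    # also covers the new organ, so old-organ knowledge is only distilled on
    # pixels outside the new-organ annotation.
    transfer_region = (~new_organ_mask).float()

    # Pixel-wise cross-entropy between teacher probabilities and the student's
    # predictions over the shared (background + K old organs) channels.
    student_old = F.log_softmax(student_logits[:, : teacher_logits.shape[1]], dim=1)
    kd = -(teacher_prob * student_old).sum(dim=1)
    weighted_kd = (confidence * transfer_region * kd).sum() / (transfer_region.sum() + eps)

    # Supervised loss on the new organ uses the available single-organ labels.
    new_ce = F.binary_cross_entropy_with_logits(
        student_logits[:, -1], new_organ_mask.float()
    )
    return weighted_kd + new_ce
```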