In this paper, we target self-supervised representation learning for zero-shot tumor segmentation. We make the following contributions. First, we advocate a zero-shot setting, in which pre-trained models should be directly applicable to the downstream task without using any manual annotations. Second, we take inspiration from "layer-decomposition" and innovate on the training regime with simulated tumor data. Third, we conduct extensive ablation studies to analyse the critical components in data simulation and validate the necessity of the different proxy tasks. We demonstrate that, with sufficient texture randomization in simulation, a model trained on synthetic data can effortlessly generalise to segment real tumor data. Fourth, our approach achieves superior results for zero-shot tumor segmentation on different downstream datasets: BraTS2018 for brain tumor segmentation and LiTS2017 for liver tumor segmentation. When evaluating model transferability for tumor segmentation under a low-annotation regime, the proposed approach also outperforms all existing self-supervised approaches, opening up the use of self-supervised learning in practical scenarios.