In this paper, we propose a self-supervised approach for tumor segmentation. Specifically, we advocate a zero-shot setting, in which models obtained from self-supervised learning are directly applicable to the downstream task, without using any manual annotations whatsoever. We make the following contributions. First, through careful examination of existing self-supervised learning approaches, we reveal the surprising result that, given suitable data augmentation, models trained from scratch in fact achieve performance comparable to those pre-trained with self-supervised learning. Second, inspired by the observation that tumors tend to be characterized independently of their contexts, we propose a scalable pipeline for generating synthetic tumor data, and train a self-supervised model that minimises the generalisation gap to the downstream task. Third, we conduct extensive ablation studies on different downstream datasets: BraTS2018 for brain tumor segmentation and LiTS2017 for liver tumor segmentation. When evaluating model transferability for tumor segmentation under a low-annotation regime, including the extreme case of zero-shot segmentation, the proposed approach achieves state-of-the-art performance, substantially outperforming all existing self-supervised approaches and opening up the use of self-supervised learning in practical scenarios.