Unsupervised domain adaptation (UDA) has been extensively explored to alleviate domain shifts between source and target domains, by transferring a model trained under the supervision of a labeled source domain to an unlabeled target domain. Recent literature, however, has indicated that performance remains far from satisfactory in the presence of significant domain shifts. Nonetheless, delineating a few target samples is usually manageable and particularly worthwhile, given the substantial performance gain it can yield. Inspired by this, we aim to develop semi-supervised domain adaptation (SSDA) for medical image segmentation, which is largely underexplored. We thus propose to exploit both labeled source and target domain data, in addition to unlabeled target data, in a unified manner. Specifically, we present a novel asymmetric co-training (ACT) framework to integrate these subsets and avoid domination by the source domain data. Following a divide-and-conquer strategy, we explicitly decouple the label supervisions in SSDA into two asymmetric sub-tasks, namely semi-supervised learning (SSL) and UDA, and leverage different knowledge from two segmentors to account for the distinction between the source and target label supervisions. The knowledge learned in the two modules is then adaptively integrated with ACT by having them iteratively teach each other, based on confidence-aware pseudo-labels. In addition, pseudo-label noise is well controlled with an exponential MixUp decay scheme for smooth propagation. Experiments on cross-modality brain tumor MRI segmentation tasks using the BraTS18 database showed that, even with limited labeled target samples, ACT yielded marked improvements over UDA and state-of-the-art SSDA methods and approached an "upper bound" of supervised joint training.
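The exponential MixUp decay scheme mentioned above can be illustrated with a minimal sketch: pseudo-labeled target samples are mixed with trusted labeled samples, and the weight on the clean sample decays exponentially over training so that pseudo-labels are propagated smoothly rather than trusted abruptly. The function names, hyperparameters (`lam0`, `k`), and decay form below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def emd_weight(iteration, total_iters, lam0=1.0, k=5.0):
    """Exponential MixUp decay: the mixing weight on the clean labeled
    sample starts at lam0 and decays toward 0, so pseudo-labeled target
    data gradually dominates the mixed input. lam0 and k are
    hypothetical hyperparameters for illustration."""
    return lam0 * np.exp(-k * iteration / total_iters)

def mixup(clean_img, clean_lbl, pseudo_img, pseudo_lbl, lam):
    """Convex combination of a labeled sample and a pseudo-labeled
    target sample; lam weights the clean sample. Labels are assumed
    to be one-hot or soft segmentation maps, so they mix linearly."""
    img = lam * clean_img + (1.0 - lam) * pseudo_img
    lbl = lam * clean_lbl + (1.0 - lam) * pseudo_lbl
    return img, lbl
```

Early in training the mixed input is close to the clean labeled sample; later, as the decay weight shrinks, training relies increasingly on the (progressively more reliable) pseudo-labeled target data.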