Transferring prior knowledge from a source domain to the same or a similar target domain can greatly enhance model performance on the target domain. However, directly leveraging knowledge from the source domain is challenging due to task discrepancy and domain shift. To bridge the gaps between different tasks and domains, we propose a Multi-Head Feature Adaptation module, which projects features from the source feature space into a new space that is more similar to the target space. Knowledge transfer is particularly important in Whole Slide Image (WSI) classification, since the number of WSIs in a single dataset may be too small to achieve satisfactory performance. WSI classification is therefore an ideal testbed for our method, and we adapt multiple knowledge transfer methods to WSI classification. Experimental results show that models with knowledge transfer outperform models trained from scratch by a large margin, regardless of the number of WSIs in the dataset, and that our method achieves state-of-the-art performance among knowledge transfer methods on multiple datasets, including TCGA-RCC, TCGA-NSCLC, and Camelyon16.
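To make the adaptation step concrete, the following is a minimal PyTorch sketch of a multi-head feature adaptation module. The abstract only states that features are projected from the source space into a target-like space through multiple heads; the specific design here (independent linear projection heads whose outputs are averaged) and all names (`MultiHeadFeatureAdaptation`, `in_dim`, `out_dim`, `num_heads`) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class MultiHeadFeatureAdaptation(nn.Module):
    """Hypothetical sketch of a multi-head feature adaptation module.

    Projects source-domain features into a target-like space via
    several parallel projection heads. The per-head linear design and
    the averaging aggregation are assumptions for illustration only.
    """

    def __init__(self, in_dim: int, out_dim: int, num_heads: int = 4):
        super().__init__()
        # One independent linear projection per head (assumed design).
        self.heads = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(num_heads)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) features extracted in the source space.
        # Each head produces a candidate target-space projection;
        # averaging aggregates them into a single adapted feature.
        projected = torch.stack([head(x) for head in self.heads], dim=0)
        return projected.mean(dim=0)


# Usage: adapt 1024-d source features to a 512-d target-like space.
adapter = MultiHeadFeatureAdaptation(in_dim=1024, out_dim=512)
adapted = adapter(torch.randn(8, 1024))  # -> shape (8, 512)
```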