The appearance of histopathology images depends on tissue type, staining, and the digitization procedure, all of which vary from source to source and are potential causes of domain shift. Because of this, despite the great success of deep learning models in computational pathology, a model trained on one domain may still perform sub-optimally when applied to another. To overcome this, we propose a new augmentation called PatchShuffling and a novel self-supervised contrastive learning framework named IMPaSh for pre-training deep learning models. Using these, we obtain a ResNet50 encoder that extracts image representations resistant to domain shift. We compare the derived representations against those obtained with other domain-generalization techniques by using them for cross-domain classification of colorectal tissue images, and show that the proposed method outperforms both traditional histology domain-adaptation methods and state-of-the-art self-supervised learning methods. Code is available at: https://github.com/trinhvg/IMPash .
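To make the PatchShuffling augmentation concrete, the following is a minimal NumPy sketch of one plausible implementation: the image is divided into a regular grid of patches whose positions are then randomly permuted. The function name, grid size, and cropping behavior here are illustrative assumptions, not the authors' exact implementation (see the linked repository for the official code).

```python
import numpy as np

def patch_shuffle(image: np.ndarray, grid: int = 4, rng=None) -> np.ndarray:
    """Hypothetical sketch of a PatchShuffling-style augmentation:
    split an H x W x C image into a grid x grid layout of patches
    and randomly permute the patch positions."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    # Crop so the image divides evenly into the grid.
    img = image[: ph * grid, : pw * grid]
    # Collect patches in row-major order.
    patches = [
        img[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
        for i in range(grid) for j in range(grid)
    ]
    # Randomly reassign each grid cell a patch.
    order = rng.permutation(len(patches))
    out = np.empty_like(img)
    for k, idx in enumerate(order):
        i, j = divmod(k, grid)
        out[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = patches[idx]
    return out
```

Because the shuffle only rearranges local patches, global layout cues are destroyed while local texture (which is what such a pretext task encourages the encoder to rely on) is preserved.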