Self-supervised learning (SSL) has drawn increasing attention in pathological image analysis in recent years. Compared with contrastive learning, which requires careful design, the masked autoencoder (MAE), which builds SSL on a generative paradigm, is arguably a simpler approach. In this paper, we introduce MAE to pathological image classification and examine the effect of the visible patches. Based on this analysis, we propose a novel SD-MAE model that enables self-distillation-augmented SSL on top of the raw MAE. Besides the reconstruction loss on masked image patches, SD-MAE further imposes a self-distillation loss on visible patches, transferring knowledge from the decoder, which applies global attention, to the encoder, which only uses local attention. We apply SD-MAE to two public pathological image datasets. Experiments demonstrate that SD-MAE is highly competitive with other SSL methods. Our code will be released soon.
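The abstract describes a two-term objective: the standard MAE reconstruction loss on masked patches plus a self-distillation loss on visible patches, with the decoder acting as teacher for the encoder. The sketch below illustrates one plausible form of this objective; it is not the authors' released code, and the module interfaces, the shared feature dimension, and the use of MSE for the distillation term are assumptions.

```python
# Minimal sketch of an SD-MAE-style objective (assumptions noted above).
import torch
import torch.nn.functional as F


def sd_mae_loss(encoder, decoder, patches, mask_ratio=0.75):
    """patches: (B, N, D_pix) pre-patchified images.
    Returns MAE reconstruction loss + self-distillation loss on visible patches."""
    B, N, _ = patches.shape
    n_visible = int(N * (1 - mask_ratio))

    # Random mask, shared across the batch in this sketch for brevity.
    perm = torch.randperm(N)
    ids_visible, ids_masked = perm[:n_visible], perm[n_visible:]

    # Encoder sees visible patches only (its attention is restricted to them).
    z_visible = encoder(patches[:, ids_visible])              # (B, n_visible, D)

    # Decoder attends over all token positions (global attention) and is
    # assumed to return per-token features plus pixel predictions.
    dec_feats, pred_pixels = decoder(z_visible, ids_visible, ids_masked)

    # 1) Reconstruction loss on masked patches only (standard MAE).
    rec_loss = F.mse_loss(pred_pixels[:, ids_masked], patches[:, ids_masked])

    # 2) Self-distillation: decoder features of visible patches (teacher,
    #    detached) supervise the encoder features of the same patches.
    distill_loss = F.mse_loss(z_visible, dec_feats[:, ids_visible].detach())

    return rec_loss + distill_loss
```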