Detecting and segmenting objects within whole slide images is essential in the computational pathology workflow. Self-supervised learning (SSL) is appealing for such annotation-heavy tasks. Despite extensive benchmarks on dense tasks in natural images, such studies are, unfortunately, absent in current works for pathology. Our paper intends to narrow this gap. We first benchmark representative SSL methods for dense prediction tasks in pathology images. Then, we propose concept contrastive learning (ConCL), an SSL framework for dense pre-training. We explore how ConCL performs with concepts provided by different sources and end up proposing a simple dependency-free concept generating method that does not rely on external segmentation algorithms or saliency detection models. Extensive experiments demonstrate the superiority of ConCL over previous state-of-the-art SSL methods across different settings. Along our exploration, we distill several important and intriguing components contributing to the success of dense pre-training for pathology images. We hope this work could provide useful data points and encourage the community to conduct ConCL pre-training for problems of interest. Code is available.
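The abstract only names ConCL at a high level. As a rough illustration of what concept-level contrastive pre-training can look like, the sketch below pools dense features within concept regions and applies an InfoNCE loss across two augmented views. The helper names (`mask_pool`, `concept_infonce`), the use of a single concept map shared by both views, and the exact pooling and loss details are assumptions for illustration only, not the authors' reference implementation.

```python
# Minimal sketch of concept-level contrastive learning in PyTorch.
# Assumes both views' dense features have already been aligned to a
# common coordinate frame so one concept map applies to both.
import torch
import torch.nn.functional as F


def mask_pool(feats, concept_map, num_concepts):
    """Average-pool dense features within each concept region.

    feats:       (B, C, H, W) dense feature map from one view.
    concept_map: (B, H, W) long tensor assigning each location to a concept.
    Returns:     (B, K, C), one prototype vector per concept.
    """
    B, C, H, W = feats.shape
    # One-hot concept masks: (B, K, H*W)
    onehot = F.one_hot(concept_map.view(B, -1), num_concepts).permute(0, 2, 1).float()
    flat = feats.view(B, C, -1)                          # (B, C, H*W)
    area = onehot.sum(dim=-1, keepdim=True).clamp(min=1.0)
    return torch.einsum('bkn,bcn->bkc', onehot, flat) / area


def concept_infonce(feats_q, feats_k, concept_map, num_concepts=8, tau=0.2):
    """InfoNCE over concept prototypes: the same concept seen in two views
    forms a positive pair; all other concepts act as negatives."""
    q = F.normalize(mask_pool(feats_q, concept_map, num_concepts), dim=-1)
    k = F.normalize(mask_pool(feats_k, concept_map, num_concepts), dim=-1)
    B, K, _ = q.shape
    logits = torch.einsum('bkc,bjc->bkj', q, k) / tau    # (B, K, K)
    labels = torch.arange(K, device=q.device).expand(B, K)
    return F.cross_entropy(logits.reshape(B * K, K), labels.reshape(B * K))
```

In this sketch the concept map could come from any source the paper considers (e.g., an external segmenter or a dependency-free grouping of the features themselves); the loss is agnostic to how the concepts are generated.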