Computational pathology can help save human lives, but models are annotation-hungry and pathology images are notoriously expensive to annotate. Self-supervised learning (SSL) has been shown to be an effective method for utilizing unlabeled data, and its application to pathology could greatly benefit its downstream tasks. Yet, there are no principled studies that compare SSL methods and discuss how to adapt them for pathology. To address this need, we execute the largest-scale study of SSL pre-training on pathology image data to date. Our study is conducted using 4 representative SSL methods on diverse downstream tasks. We establish that large-scale domain-aligned pre-training in pathology consistently outperforms ImageNet pre-training in standard SSL settings such as linear and fine-tuning evaluations, as well as in low-label regimes. Moreover, we propose a set of domain-specific techniques that we experimentally show lead to a performance boost. Lastly, for the first time, we apply SSL to the challenging task of nuclei instance segmentation and show large and consistent performance improvements under diverse settings.