The examination of histopathology images is considered the gold standard for the diagnosis and stratification of cancer patients. A key challenge in the analysis of such images is their size, which can run into the gigapixels and can require tedious screening by clinicians. With recent advances in computational medicine, automatic tools have been proposed to assist clinicians in their everyday practice. Such tools typically process these large images by slicing them into tiles that can then be encoded and utilized in different clinical models. In this study, we propose a novel generative framework that learns powerful representations for such tiles by learning to plausibly expand their visual field. In particular, we develop a progressively grown generative model trained with a visual field expansion objective. Thus trained, our model learns to generate different tissue types with fine details, while simultaneously learning powerful representations that can be used for different clinical endpoints, all in a self-supervised way. To evaluate the performance of our model, we conducted classification experiments on the CAMELYON17 and CRC benchmark datasets, in which our model compares favorably to other self-supervised and pre-trained strategies commonly used in digital pathology. Our code is available at https://github.com/jcboyd/cdpath21-gan.
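To make the training objective concrete, below is a minimal, hypothetical PyTorch sketch of the visual field expansion idea described above: a generator encodes a central 64x64 crop of a tile and decodes an image covering twice the visual field, while a discriminator enforces realism. All module names, layer sizes, and the combined L1 + adversarial loss are illustrative assumptions, and the progressive growing used by the actual model is omitted; see the linked repository for the authors' implementation.

```python
# Hypothetical sketch of self-supervised visual field expansion.
# Not the authors' implementation: architecture and losses are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Generator(nn.Module):
    """Encodes a small central tile and decodes an image with a larger
    visual field (here 2x the spatial extent: 64x64 in, 128x128 out)."""

    def __init__(self, channels=3, width=64):
        super().__init__()
        self.encoder = nn.Sequential(  # 64x64 -> 8x8 feature map
            nn.Conv2d(channels, width, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(width, width * 2, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(width * 2, width * 4, 4, 2, 1), nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(  # 8x8 -> 128x128 output
            nn.ConvTranspose2d(width * 4, width * 2, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(width * 2, width, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(width, width, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(width, channels, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

    def features(self, x):
        # Pooled encoder activations double as a tile representation
        # for downstream classifiers (the self-supervised payoff).
        return self.encoder(x).mean(dim=(2, 3))


class Discriminator(nn.Module):
    """Scores 128x128 expanded-field images as real or generated."""

    def __init__(self, channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(width, width * 2, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(width * 2, width * 4, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(width * 4, 1),
        )

    def forward(self, x):
        return self.net(x)


def training_step(G, D, opt_g, opt_d, expanded):
    """One self-supervised step: crop the centre of each real tile, then
    ask G to regenerate the full visual field, with D judging realism."""
    b, _, h, w = expanded.shape
    centre = expanded[:, :, h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    fake = G(centre)

    # Discriminator update: real expanded tiles vs. generated expansions.
    opt_d.zero_grad()
    d_loss = (
        F.binary_cross_entropy_with_logits(D(expanded), torch.ones(b, 1))
        + F.binary_cross_entropy_with_logits(D(fake.detach()), torch.zeros(b, 1))
    )
    d_loss.backward()
    opt_d.step()

    # Generator update: fool D and reconstruct the true expanded field.
    opt_g.zero_grad()
    g_loss = (
        F.binary_cross_entropy_with_logits(D(fake), torch.ones(b, 1))
        + F.l1_loss(fake, expanded)
    )
    g_loss.backward()
    opt_g.step()


if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    tiles = torch.rand(4, 3, 128, 128) * 2 - 1  # stand-in for real tile batches
    training_step(G, D, opt_g, opt_d, tiles)
```

In this sketch, `Generator.features` stands in for the learned tile representations that would feed the downstream CAMELYON17 and CRC classification experiments.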