Whole Slide Image (WSI) analysis is a powerful method to facilitate the diagnosis of cancer in tissue samples. Automating this diagnosis poses several challenges, most notably the immense image resolution and the scarcity of annotations. WSIs commonly exhibit resolutions of 100K×100K pixels. Annotating cancerous areas in WSIs at the pixel level is prohibitively labor-intensive and requires a high level of expert knowledge. Multiple instance learning (MIL) alleviates the need for expensive pixel-level annotations. In MIL, learning is performed from slide-level labels, where a pathologist merely indicates whether a slide contains cancerous tissue. Here, we propose Self-ViT-MIL, a novel approach for classifying and localizing cancerous areas based on slide-level annotations, eliminating the need for pixel-wise annotated training data. Self-ViT-MIL is pre-trained in a self-supervised setting to learn rich feature representations without relying on any labels. The recent Vision Transformer (ViT) architecture serves as the feature extractor of Self-ViT-MIL. For localizing cancerous regions, a MIL aggregator with global attention is utilized. To the best of our knowledge, Self-ViT-MIL is the first approach to introduce self-supervised ViTs into MIL-based WSI analysis tasks. We showcase the effectiveness of our approach on the widely used Camelyon16 dataset. Self-ViT-MIL surpasses existing state-of-the-art MIL-based approaches in terms of accuracy and area under the curve (AUC).
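The global-attention MIL aggregation mentioned above can be sketched as follows. This is a minimal NumPy illustration of attention-based MIL pooling (in the spirit of attention-MIL aggregators), not the paper's actual implementation; all shapes, weights, and names are illustrative assumptions:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(patch_feats, w, v):
    """Global-attention MIL pooling (illustrative sketch).

    Each patch embedding receives a scalar attention score; the slide-level
    embedding is the attention-weighted sum of patch embeddings. The attention
    weights also serve as a localization map over patches.

    patch_feats: (n_patches, d) patch embeddings (e.g. from a ViT backbone)
    w: (d, h) projection matrix, v: (h,) scoring vector -- hypothetical shapes
    """
    scores = np.tanh(patch_feats @ w) @ v   # (n_patches,) raw attention scores
    attn = softmax(scores)                  # normalized attention over patches
    slide_embedding = attn @ patch_feats    # (d,) aggregated slide embedding
    return slide_embedding, attn

# toy example: 4 patch embeddings of dimension 3
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3))
w = rng.normal(size=(3, 8))
v = rng.normal(size=8)
emb, attn = attention_mil_pool(feats, w, v)
```

In a full pipeline, `emb` would feed a slide-level classifier trained on slide labels only, while `attn` highlights which patches drive the prediction, yielding coarse localization without pixel-wise annotations.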