Vision Transformers (ViTs) and their multi-scale and hierarchical variations have been successful at capturing image representations, but their use has generally been studied for low-resolution images (e.g., 256x256, 384x384). For gigapixel whole-slide imaging (WSI) in computational pathology, WSIs can be as large as 150000x150000 pixels at 20X magnification and exhibit a hierarchical structure of visual tokens across varying resolutions: from 16x16 images capturing spatial patterns among cells, to 4096x4096 images characterizing interactions within the tissue microenvironment. We introduce a new ViT architecture called the Hierarchical Image Pyramid Transformer (HIPT), which leverages the natural hierarchical structure inherent in WSIs using two levels of self-supervised learning to learn high-resolution image representations. HIPT is pretrained across 33 cancer types using 10,678 gigapixel WSIs, 408,218 4096x4096 images, and 104M 256x256 images. We benchmark HIPT representations on 9 slide-level tasks, and demonstrate that: 1) HIPT with hierarchical pretraining outperforms current state-of-the-art methods for cancer subtyping and survival prediction, 2) self-supervised ViTs are able to model important inductive biases about the hierarchical structure of phenotypes in the tumor microenvironment.
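To make the two-level aggregation concrete, below is a minimal PyTorch sketch of the token hierarchy the abstract describes: 16x16 cell-level tokens are aggregated into a 256x256 patch embedding, patch embeddings into a 4096x4096 region embedding, and region embeddings into a slide-level prediction. The class names (TokenAggregator, HIPTSketch), dimensions, and the mean-pooled slide stage are illustrative assumptions, not the authors' exact implementation.

```python
"""Sketch of hierarchical aggregation across WSI token scales (assumed shapes)."""
import torch
import torch.nn as nn


class TokenAggregator(nn.Module):
    """Pools a sequence of token embeddings into one embedding via a small
    Transformer encoder and a learnable [CLS]-style token (a stand-in for
    each self-supervised ViT stage in the paper)."""
    def __init__(self, dim_in, dim_out, depth=2, heads=4):
        super().__init__()
        self.proj = nn.Linear(dim_in, dim_out)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim_out))
        layer = nn.TransformerEncoderLayer(d_model=dim_out, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, tokens):                      # (B, N, dim_in)
        x = self.proj(tokens)
        cls = self.cls.expand(x.size(0), -1, -1)    # one [CLS] per sequence
        x = self.encoder(torch.cat([cls, x], dim=1))
        return x[:, 0]                              # (B, dim_out)


class HIPTSketch(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # Stage 1: 16x16 cell-level tokens -> one 256x256 patch embedding.
        self.vit_256 = TokenAggregator(dim_in=384, dim_out=384)
        # Stage 2: patch embeddings -> one 4096x4096 region embedding.
        self.vit_4096 = TokenAggregator(dim_in=384, dim_out=192)
        # Slide level: region embeddings -> slide-level prediction
        # (simplified here to mean pooling plus a linear head).
        self.head = nn.Linear(192, n_classes)

    def forward(self, slide):
        # slide: (n_regions, n_patches, n_cells, 384) -- precomputed 16x16
        # token embeddings grouped by 256x256 patch and 4096x4096 region.
        n_regions, n_patches, n_cells, d = slide.shape
        patch_emb = self.vit_256(slide.reshape(-1, n_cells, d))
        patch_emb = patch_emb.reshape(n_regions, n_patches, -1)
        region_emb = self.vit_4096(patch_emb)          # (n_regions, 192)
        slide_emb = region_emb.mean(0, keepdim=True)   # pool over regions
        return self.head(slide_emb)                    # (1, n_classes)


if __name__ == "__main__":
    # Toy slide: 2 regions x 8 patches x 8 cell tokens (counts shrunk for speed).
    dummy = torch.randn(2, 8, 8, 384)
    print(HIPTSketch()(dummy).shape)  # torch.Size([1, 2])
```

Because each stage attends only within its own window (cells within a patch, patches within a region), the sequence lengths stay short at every level, which is what lets this style of architecture scale to gigapixel inputs.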