Self-supervised learning (SSL) opens up huge opportunities for better utilizing unlabeled data. This is especially important for medical image analysis, a field generally known for its lack of annotations. However, when we attempt to use as many unlabeled medical images as possible in SSL, breaking the dimension barrier (\ie, making it possible to jointly use both 2D and 3D images) becomes a must. In this paper, we propose a Universal Self-Supervised Transformer (USST) framework based on the student-teacher paradigm, aiming to leverage a large amount of unlabeled medical data of multiple dimensions to learn rich representations. To achieve this, we design a Pyramid Transformer U-Net (PTU) as the backbone, which is composed of switchable patch embedding (SPE) layers and Transformer layers. The SPE layer switches to either 2D or 3D patch embedding depending on the input dimension. After that, the images are converted to a sequence regardless of their original dimensions. The Transformer layer then models long-term dependencies in a sequence-to-sequence manner, thus enabling USST to learn representations from both 2D and 3D images. USST has two clear merits compared to current dimension-specific SSL: (1) \textbf{more effective} - it can learn representations from more and more diverse data; and (2) \textbf{more versatile} - it can be transferred to various downstream tasks. Experiments show that USST achieves promising results on six 2D/3D medical image classification and segmentation tasks, substantially outperforming supervised ImageNet pre-training and advanced SSL counterparts.
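The core idea behind the SPE layer is that a 2D image and a 3D volume are tokenized by dimension-specific patch embeddings, after which both become a flat token sequence that the same Transformer layers can consume. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the patch size, embedding dimension, and the use of a single random projection matrix per input dimensionality are illustrative assumptions (in the actual PTU, the 2D and 3D embeddings would be separately learned layers).

```python
import numpy as np

def switchable_patch_embed(x, patch=4, dim=8, rng=None):
    """Hedged sketch of a switchable patch embedding (SPE) layer.

    x: a 2D image of shape (C, H, W) or a 3D volume of shape (C, D, H, W).
    Returns a token sequence of shape (N, dim), so downstream Transformer
    layers see the same interface regardless of input dimensionality.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    if x.ndim == 3:
        # 2D branch: cut the image into non-overlapping patch x patch tiles.
        c, h, w = x.shape
        p = x.reshape(c, h // patch, patch, w // patch, patch)
        p = p.transpose(1, 3, 0, 2, 4).reshape(-1, c * patch * patch)
    elif x.ndim == 4:
        # 3D branch: cut the volume into patch x patch x patch cubes.
        c, d, h, w = x.shape
        p = x.reshape(c, d // patch, patch, h // patch, patch, w // patch, patch)
        p = p.transpose(1, 3, 5, 0, 2, 4, 6).reshape(-1, c * patch ** 3)
    else:
        raise ValueError("expected (C,H,W) or (C,D,H,W) input")
    # Linear projection of flattened patches to the embedding dimension.
    # (Stand-in for a learned projection; weights here are random.)
    W = rng.standard_normal((p.shape[1], dim))
    return p @ W

# Both dimensionalities yield a (num_tokens, dim) sequence:
tokens_2d = switchable_patch_embed(np.ones((1, 8, 8)))      # (4, 8)
tokens_3d = switchable_patch_embed(np.ones((1, 8, 8, 8)))   # (8, 8)
```

Because both branches end in the same `(N, dim)` shape, the subsequent Transformer layers need no dimension-specific code paths, which is what lets one backbone pre-train on mixed 2D/3D data.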