Although self-supervised learning enables us to bootstrap training by exploiting unlabeled data, generic self-supervised methods designed for natural images do not sufficiently incorporate context. For medical images, a desirable method should be sensitive enough to detect deviations from normal-appearing tissue in each anatomical region; here, anatomy is the context. We introduce a novel approach with two levels of self-supervised representation learning objectives: one at the regional anatomical level and another at the patient level. We use graph neural networks to incorporate the relationships between different anatomical regions. The structure of the graph is informed by anatomical correspondences between each patient and an anatomical atlas. In addition, the graph representation has the advantage of handling arbitrarily sized images at full resolution. Experiments on large-scale Computed Tomography (CT) datasets of lung images show that our approach compares favorably to baseline methods that do not account for the context. We use the learned embedding for staging lung tissue abnormalities related to COVID-19.
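The idea of representing a patient as a graph of anatomical regions can be illustrated with a minimal sketch. This is a hypothetical, simplified example (not the authors' implementation): region embeddings are node features, the atlas-derived adjacency defines the edges, one standard graph-convolution layer (symmetric normalization in the style of Kipf and Welling) propagates information between neighboring regions, and mean pooling yields a patient-level embedding. All sizes and the adjacency pattern below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
num_regions, dim = 5, 8

# Node features: one embedding per anatomical region (placeholder values).
X = rng.normal(size=(num_regions, dim))

# Adjacency between regions, assumed here to come from atlas correspondences.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# One graph-convolution layer with self-loops and symmetric normalization.
A_hat = A + np.eye(num_regions)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
W = rng.normal(size=(dim, dim))          # learnable weights (random here)
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)  # ReLU

# Regional-level representations live in H; a patient-level embedding
# can be obtained by pooling over the region nodes.
patient_embedding = H.mean(axis=0)
print(patient_embedding.shape)  # (8,)
```

Because the graph has one node per anatomical region rather than one per pixel, the same machinery applies regardless of the input image's size or resolution, which is the advantage noted in the abstract.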