The proliferation of Deep Learning (DL)-based methods for radiographic image analysis has created substantial demand for expert-labeled radiology data. Recent self-supervised frameworks have alleviated the need for expert labeling by obtaining supervision from the associated radiology reports. These frameworks, however, struggle to distinguish the subtle differences between pathologies in medical images. Additionally, many of them provide no interpretability linking image regions to text, making it difficult for radiologists to assess model predictions. In this work, we propose Local Region Contrastive Learning (LRCLR), a flexible fine-tuning framework that adds layers for significant image region selection as well as cross-modality interaction. Our results on an external validation set of chest X-rays suggest that LRCLR identifies significant local image regions and provides meaningful interpretation against radiology text while improving zero-shot performance on several chest X-ray medical findings.
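To make the two ingredients named above concrete, the following is a minimal sketch of (a) scoring image regions against a text embedding to pick out "significant" local regions, and (b) a symmetric image-text contrastive (InfoNCE) objective. This is an illustrative sketch only, not the LRCLR implementation: the function names, the top-k similarity heuristic for region selection, and the temperature value are all assumptions introduced here for exposition.

```python
import numpy as np

def select_regions(region_feats, text_feat, k=3):
    """Hypothetical region-selection step: score each image region
    embedding by dot-product similarity with the text embedding and
    keep the top-k as the 'significant' local regions."""
    scores = region_feats @ text_feat            # (num_regions,)
    top = np.argsort(scores)[::-1][:k]           # indices of the k best regions
    return region_feats[top], scores[top]

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE contrastive loss over a batch of paired
    image/text embeddings: matched pairs sit on the diagonal of the
    similarity matrix and are pulled together; off-diagonal pairs
    are pushed apart."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature           # (batch, batch)
    labels = np.arange(len(img))

    def xent(l):
        # numerically stable cross-entropy with diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

In a fine-tuning setting like the one the abstract describes, the selected region embeddings (rather than a single global image embedding) would feed the cross-modality interaction layers before the contrastive loss is applied.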