Contrastive learning is a form of self-supervision that can leverage unlabeled data to produce pretrained models. While contrastive learning has demonstrated promising results on natural image classification tasks, its application to medical imaging tasks like chest X-ray interpretation has been limited. In this work, we propose MoCo-CXR, an adaptation of the contrastive learning method Momentum Contrast (MoCo), to produce models with better representations and initializations for detecting pathologies in chest X-rays. In detecting pleural effusion, we find that linear models trained on MoCo-CXR-pretrained representations outperform those trained on representations without MoCo-CXR-pretraining, indicating that the MoCo-CXR-pretrained representations are of higher quality. End-to-end fine-tuning experiments reveal that a model initialized via MoCo-CXR-pretraining outperforms its non-MoCo-CXR-pretrained counterpart, with the benefit of MoCo-CXR-pretraining largest when labeled training data are limited. Finally, we demonstrate similar results on a target tuberculosis dataset unseen during pretraining, indicating that MoCo-CXR-pretraining endows models with representations and transferability that can be applied across chest X-ray datasets and tasks.
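As a rough illustration of the linear-evaluation protocol described above, the sketch below freezes a backbone (hypothetically initialized from MoCo-CXR-pretrained weights) and trains only a new linear head for pleural effusion detection. The checkpoint path, backbone choice (ResNet-18), and dummy batch are placeholders for illustration, not the authors' released code or data pipeline.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Build a ResNet-18 backbone; hypothetically load MoCo-CXR-pretrained weights.
# The checkpoint path and key layout are placeholders, not the authors' release format.
backbone = models.resnet18(weights=None)
# state_dict = torch.load("moco_cxr_resnet18.pth", map_location="cpu")
# backbone.load_state_dict(state_dict, strict=False)

# Linear evaluation: freeze all backbone parameters so only the new head is trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classification head with a single linear layer for a binary task
# (pleural effusion present vs. absent).
num_features = backbone.fc.in_features
backbone.fc = nn.Linear(num_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch: 3-channel 224x224 tensors
# stand in for preprocessed chest X-rays.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()

logits = backbone(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

End-to-end fine-tuning differs only in that the backbone parameters are left trainable (and typically optimized with a smaller learning rate) rather than frozen.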