Training deep learning models on cardiac magnetic resonance imaging (CMR) can be challenging due to the scarcity of expert-generated labels and the inherent complexity of the data source. Self-supervised contrastive learning (SSCL) has recently been shown to boost performance in several medical imaging tasks. However, it is unclear how much the pre-trained representation reflects the primary organ of interest rather than spurious surrounding tissue. In this work, we evaluate the optimal method of incorporating prior knowledge of anatomy into an SSCL training paradigm. Specifically, we use a segmentation network to explicitly localize the heart in CMR images before SSCL pre-training, and evaluate the resulting representations on multiple diagnostic tasks. We find that using a priori knowledge of anatomy can greatly improve downstream diagnostic performance. Furthermore, SSCL pre-training with in-domain data generally improved downstream performance and produced more human-like saliency compared to end-to-end training and ImageNet pre-trained networks. However, introducing anatomical knowledge into the pre-training itself generally had no significant impact.
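The pipeline described above, heart localization via a segmentation network followed by contrastive pre-training, can be sketched as follows. This is a minimal illustration only: it assumes a SimCLR-style NT-Xent objective and a bounding-box crop around the predicted heart mask, and the crop margin, mask source, and encoder are hypothetical placeholders rather than the exact components used in this work.

```python
# Sketch: anatomy-aware SSCL, assuming a SimCLR-style NT-Xent loss.
# The heart mask is assumed to come from a pre-trained segmentation
# network (not shown); margin and temperature are illustrative values.
import torch
import torch.nn.functional as F

def crop_to_heart(image, heart_mask, margin=8):
    """Crop a CMR image to the bounding box of a (non-empty) heart mask."""
    ys, xs = torch.nonzero(heart_mask, as_tuple=True)
    y0, y1 = ys.min().item(), ys.max().item()
    x0, x1 = xs.min().item(), xs.max().item()
    h, w = heart_mask.shape
    # Pad the box by a small margin so surrounding context is not lost.
    y0, y1 = max(0, y0 - margin), min(h, y1 + margin + 1)
    x0, x1 = max(0, x0 - margin), min(w, x1 + margin + 1)
    return image[..., y0:y1, x0:x1]

def nt_xent(z1, z2, temperature=0.1):
    """NT-Xent loss over two batches of projected embeddings (n x d each)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    n = z1.shape[0]
    # Mask self-similarity; each view's positive is its counterpart view.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))
    targets = torch.arange(2 * n, device=z.device).roll(n)
    return F.cross_entropy(sim, targets)

# Usage (shapes only): crop each image with crop_to_heart, generate two
# augmented views, pass both through an encoder + projection head to get
# z1 and z2, then minimize nt_xent(z1, z2).
```

After pre-training, the projection head would be discarded and the encoder fine-tuned or linearly probed on the downstream diagnostic tasks.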