Self-supervised learning in vision-language processing exploits semantic alignment between imaging and text modalities. Prior work in biomedical VLP has mostly relied on the alignment of single image and report pairs even though clinical notes commonly refer to prior images. This not only leads to poorer alignment between the modalities but also misses the opportunity to exploit rich self-supervision through the temporal content already present in the data. In this work, we explicitly account for prior images and reports, when available, during both training and fine-tuning. Our approach, named BioViL-T, uses a CNN-Transformer hybrid multi-image encoder trained jointly with a text model. It is designed to handle challenges that arise across time, such as pose variations and missing input images. The resulting model excels on downstream tasks in both single- and multi-image setups, achieving state-of-the-art performance on (I) progression classification, (II) phrase grounding, and (III) report generation, whilst offering consistent improvements on disease classification and sentence-similarity tasks. We release a novel multi-modal temporal benchmark dataset, MS-CXR-T, to quantify the quality of vision-language representations in terms of temporal semantics. Our experimental results show the advantages of incorporating prior images and reports to make the most of the data.
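To make the architecture description above concrete, the following is a minimal, hypothetical PyTorch sketch of a CNN-Transformer hybrid multi-image encoder in the spirit of BioViL-T: a shared CNN trunk extracts patch features from the current image and, when available, the prior image, and a small transformer fuses the two token sets. It is not the authors' released implementation; the class name `MultiImageEncoder`, the ResNet-50 backbone, the embedding size, and the fusion details are illustrative assumptions only.

```python
# Hypothetical sketch (not the BioViL-T release): shared CNN trunk + transformer
# fusion over current and optional prior image tokens.
from typing import Optional

import torch
import torch.nn as nn
from torchvision.models import resnet50


class MultiImageEncoder(nn.Module):
    def __init__(self, embed_dim: int = 512, num_layers: int = 2, num_heads: int = 8):
        super().__init__()
        cnn = resnet50(weights=None)
        # Keep the convolutional trunk only; drop global pooling and classifier head.
        self.backbone = nn.Sequential(*list(cnn.children())[:-2])
        self.proj = nn.Conv2d(2048, embed_dim, kernel_size=1)
        # Learned embeddings distinguish current vs. prior image tokens.
        self.time_embed = nn.Embedding(2, embed_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

    def _tokens(self, image: torch.Tensor, time_idx: int) -> torch.Tensor:
        feats = self.proj(self.backbone(image))      # (B, D, H', W')
        tokens = feats.flatten(2).transpose(1, 2)    # (B, H'*W', D)
        return tokens + self.time_embed.weight[time_idx]

    def forward(self, current: torch.Tensor, prior: Optional[torch.Tensor] = None) -> torch.Tensor:
        tokens = self._tokens(current, time_idx=0)
        if prior is not None:
            # Joint token sequence over both time points; if the prior image is
            # missing, the model degrades gracefully to a single-image encoder.
            tokens = torch.cat([tokens, self._tokens(prior, time_idx=1)], dim=1)
        return self.fusion(tokens)                   # fused patch embeddings


# Example: one study with a prior image, one without.
encoder = MultiImageEncoder()
curr = torch.randn(1, 3, 224, 224)
prev = torch.randn(1, 3, 224, 224)
out_pair = encoder(curr, prev)   # uses temporal context
out_single = encoder(curr)       # prior image missing
```

In this sketch the patch embeddings would then be aligned with the jointly trained text model; the single call signature covering both the paired and single-image cases mirrors the abstract's claim that the method works in both setups.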