Medical Visual Question Answering (Medical-VQA) aims to answer clinical questions about radiology images, assisting doctors with decision-making. However, current Medical-VQA models learn cross-modal representations with vision and text encoders residing in two separate spaces, which leads to indirect semantic alignment. In this paper, we propose UnICLAM, a Unified and Interpretable Medical-VQA model based on Contrastive Representation Learning with Adversarial Masking. Specifically, to learn an aligned image-text representation, we first establish a unified dual-stream pre-training structure with a gradually soft parameter-sharing strategy. Technically, the proposed strategy constrains the vision and text encoders to remain close in the same space, and this constraint is gradually loosened at higher layers. Moreover, to capture a unified semantic representation, we extend adversarial masking data augmentation to the contrastive representation learning of vision and text in a unified manner. Concretely, while encoder training minimizes the distance between original and masked samples, the adversarial masking module is trained adversarially to maximize that distance. Furthermore, we take a closer look at the unified adversarial masking augmentation model, which improves potential ante-hoc interpretability while retaining remarkable performance and efficiency. Experimental results on the VQA-RAD and SLAKE public benchmarks demonstrate that UnICLAM outperforms 11 existing state-of-the-art Medical-VQA models. More importantly, we additionally discuss the performance of UnICLAM in diagnosing heart failure, verifying that UnICLAM exhibits superior few-shot adaptation performance in practical disease diagnosis.
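To make the gradually soft parameter-sharing strategy concrete, the sketch below shows one plausible PyTorch realization: a regularizer that penalizes the distance between corresponding vision- and text-encoder layer weights, with a penalty weight that decays with depth so the constraint is loosened at higher layers. This is a minimal illustration under stated assumptions, not the paper's actual implementation; the names (`soft_sharing_loss`, `decay`) and the exponential decay schedule are ours, and corresponding layers are assumed to have identical parameter shapes.

```python
import torch

def soft_sharing_loss(vision_layers, text_layers, base_weight=1.0, decay=0.5):
    """Hypothetical gradually-soft parameter-sharing penalty.

    Encourages each vision-encoder layer to stay close in parameter space
    to its corresponding text-encoder layer, with the constraint weight
    shrinking (i.e., the constraint loosening) at higher layers.
    Assumes paired layers have parameters of identical shapes.
    """
    loss = 0.0
    for depth, (v_layer, t_layer) in enumerate(zip(vision_layers, text_layers)):
        weight = base_weight * (decay ** depth)  # smaller weight at higher layers
        for p_v, p_t in zip(v_layer.parameters(), t_layer.parameters()):
            loss = loss + weight * (p_v - p_t).pow(2).sum()
    return loss
```

In pre-training, such a penalty would simply be added to the main objective, so the two encoders behave as softly shared at low layers while diverging freely at high layers, rather than being hard weight-tied.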
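The adversarial masking objective described above is a min-max game: the encoder minimizes the distance between the original and masked views while the masking module maximizes it. The sketch below shows one way such alternating updates could look in PyTorch; `contrastive_distance`, `training_step`, and the `masker` module are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def contrastive_distance(z_orig, z_masked):
    """Cosine-based distance between original and masked-view embeddings."""
    return 1.0 - F.cosine_similarity(z_orig, z_masked, dim=-1).mean()

def training_step(encoder, masker, images, enc_opt, mask_opt):
    """One alternating min-max update (a sketch, not the paper's exact scheme).

    encoder: maps images to embeddings; masker: learnable masking module
    that produces an adversarially masked view of its input.
    """
    # 1) Encoder step: pull original and masked embeddings together.
    z_orig = encoder(images)
    z_masked = encoder(masker(images))
    loss_enc = contrastive_distance(z_orig, z_masked)
    enc_opt.zero_grad()
    loss_enc.backward()
    enc_opt.step()

    # 2) Masker step: adversarially push the same distance apart
    #    (negated loss; only the masker's parameters are updated).
    z_orig = encoder(images).detach()
    z_masked = encoder(masker(images))
    loss_mask = -contrastive_distance(z_orig, z_masked)
    mask_opt.zero_grad()
    loss_mask.backward()
    mask_opt.step()
    return loss_enc.item(), -loss_mask.item()
```

Alternating the two updates keeps the masking module producing progressively harder masked views, which is what drives the encoder toward the unified semantic representation the abstract describes.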