Image-text multimodal representation learning aligns data across modalities and enables important medical applications, e.g., image classification, visual grounding, and cross-modal retrieval. In this work, we establish a connection between multimodal representation learning and multiple instance learning. Based on this connection, we propose a generic framework for constructing permutation-invariant score functions with many existing multimodal representation learning approaches as special cases. Furthermore, we use the framework to derive a novel contrastive learning approach and demonstrate that our method achieves state-of-the-art results in several downstream tasks.
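The abstract's central construction, a permutation-invariant score function over bags of instances, used inside a contrastive objective, can be illustrated with a minimal sketch. This is not the paper's method: the mean-pooling score, the cosine similarity, and the InfoNCE-style loss below are assumed stand-ins chosen because mean pooling is the simplest permutation-invariant aggregator, and the function names (`pool`, `score`, `contrastive_loss`) are hypothetical.

```python
import math

def pool(instances):
    """Mean-pool a bag of instance embeddings into one vector.
    Mean pooling is permutation-invariant: reordering the
    instances leaves the pooled vector unchanged."""
    dim = len(instances[0])
    return [sum(v[d] for v in instances) / len(instances) for d in range(dim)]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def score(image_regions, text_tokens):
    """Permutation-invariant image-text score: pool each bag of
    instances (image regions, text tokens), then compare the
    pooled embeddings."""
    return cosine(pool(image_regions), pool(text_tokens))

def contrastive_loss(image_bags, text_bags, temperature=0.1):
    """InfoNCE-style contrastive loss over matched (image, text)
    pairs: the i-th image should score highest against the i-th
    text among all texts in the batch."""
    n = len(image_bags)
    loss = 0.0
    for i in range(n):
        logits = [score(image_bags[i], text_bags[j]) / temperature
                  for j in range(n)]
        m = max(logits)  # subtract max for numerical stability
        log_softmax = logits[i] - m - math.log(
            sum(math.exp(l - m) for l in logits))
        loss -= log_softmax
    return loss / n
```

Because each bag is reduced by a symmetric aggregator before scoring, shuffling the regions of an image (or the tokens of a report) leaves `score` unchanged, which is the permutation-invariance property the framework requires; richer choices such as attention pooling preserve it as well.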