Common image-text joint understanding techniques presume that images and the associated text can universally be characterized by a single implicit model. However, co-occurring images and text can be related in qualitatively different ways, and explicitly modeling these relations could improve the performance of current joint understanding models. In this paper, we train a Cross-Modal Coherence Model for a text-to-image retrieval task. Our analysis shows that models trained with image-text coherence relations can retrieve images originally paired with target text more often than coherence-agnostic models. We also show via human evaluation that images retrieved by the proposed coherence-aware model are preferred over those retrieved by a coherence-agnostic baseline by a substantial margin. Our findings provide insights into the ways that different modalities communicate and the role of coherence relations in capturing commonsense inferences in text and imagery.