Multimodal learning extends unimodal learning by generalizing its domain to diverse modalities such as text, images, and speech, requiring models to process and relate information across modalities. In Information Retrieval, traditional retrieval tasks focus on the similarity between unimodal documents and queries, while image-text retrieval assumes that most texts describe the scene context of images. This separation overlooks the fact that real-world queries may involve text content, image captions, or both. To address this, we introduce Multimodal Retrieval on Representation of ImaGe witH Text (Mr. Right), a novel and comprehensive dataset for multimodal retrieval. We build on the Wikipedia dataset, with its rich text-image examples, and generate three types of text-based queries carrying different modality information: text-related, image-related, and mixed. To validate the effectiveness of our dataset, we provide a multimodal training paradigm and evaluate previous text retrieval and image retrieval frameworks. The results show that the proposed multimodal retrieval improves retrieval performance, but creating a well-unified document representation from texts and images remains a challenge. We hope Mr. Right helps broaden current retrieval systems and accelerates the advancement of multimodal learning in Information Retrieval.