This paper presents Universal Vision-Language Dense Retrieval (UniVL-DR), which builds a unified model for multi-modal retrieval. UniVL-DR encodes queries and multi-modality resources in a shared embedding space and searches candidates across different modalities. To learn this unified embedding space, UniVL-DR proposes two techniques: 1) a universal embedding optimization strategy, which contrastively optimizes the embedding space using modality-balanced hard negatives; and 2) an image verbalization method, which bridges the modality gap between images and texts in the raw data space. UniVL-DR achieves state-of-the-art performance on the multi-modal open-domain question answering benchmark WebQA and outperforms all retrieval models on the two subtasks, text-text retrieval and text-image retrieval. These results demonstrate that a unified model for universal multi-modal search can feasibly replace the divide-and-conquer pipeline while also benefiting single- and cross-modality tasks. All source codes of this work are available at https://github.com/OpenMatch/UniVL-DR.
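To make the modality-balanced hard-negative training concrete, below is a minimal PyTorch sketch of a contrastive loss in which each query's positive candidate is scored against an equal number of text and image hard negatives. This is an illustrative sketch, not the authors' implementation: the function name, cosine scoring, temperature value, and embedding dimensions are assumptions.

```python
import torch
import torch.nn.functional as F


def modality_balanced_contrastive_loss(q_emb, pos_emb, txt_neg_emb, img_neg_emb,
                                        temperature=0.05):
    """Contrastive loss for one query (illustrative, not the official UniVL-DR code).

    q_emb:       (d,)   query embedding
    pos_emb:     (d,)   embedding of the relevant candidate (text or image)
    txt_neg_emb: (k, d) hard-negative text embeddings
    img_neg_emb: (k, d) hard-negative image embeddings, same count k as text negatives
    """
    # Modality balance: concatenate equally many text and image hard negatives.
    candidates = torch.cat([pos_emb.unsqueeze(0), txt_neg_emb, img_neg_emb], dim=0)  # (1 + 2k, d)
    scores = F.cosine_similarity(q_emb.unsqueeze(0), candidates, dim=-1) / temperature
    # The positive sits at index 0; cross-entropy pushes it above all negatives.
    target = torch.zeros(1, dtype=torch.long)
    return F.cross_entropy(scores.unsqueeze(0), target)


# Toy usage with random embeddings (dimension d and negative count k are arbitrary).
d, k = 768, 4
loss = modality_balanced_contrastive_loss(
    torch.randn(d), torch.randn(d), torch.randn(k, d), torch.randn(k, d)
)
print(loss.item())
```

Keeping the number of text and image negatives equal prevents the optimization from collapsing toward the easier (typically textual) modality, which is the motivation the paper gives for modality-balanced sampling.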