Image-text matching is gaining a leading role among tasks involving the joint understanding of vision and language. In the literature, this task is often used as a pre-training objective to forge architectures able to jointly handle images and texts. Nonetheless, it has a direct downstream application: cross-modal retrieval, which consists of finding images relevant to a given query text, or vice versa. Solving this task is of critical importance in cross-modal search engines. Many recent methods have proposed effective solutions to the image-text matching problem, mostly using large vision-language (VL) Transformer networks. However, these models are often computationally expensive, especially at inference time. This prevents their adoption in large-scale cross-modal retrieval scenarios, where results should be provided to the user almost instantaneously. In this paper, we propose to bridge the gap between effectiveness and efficiency with an ALign And DIstill Network (ALADIN). ALADIN first produces highly effective relevance scores by aligning images and texts at a fine-grained level. Then, it learns a shared embedding space, in which an efficient kNN search can be performed, by distilling the relevance scores obtained from the fine-grained alignments. We obtained remarkable results on MS-COCO, showing that our method can compete with state-of-the-art VL Transformers while being almost 90 times faster. The code for reproducing our results is available at https://github.com/mesnico/ALADIN.
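The following is a minimal sketch, not the authors' implementation, of the "align and distill" idea summarized above: a frozen fine-grained alignment model provides teacher relevance scores, and a lightweight bi-encoder student is trained so that dot-product similarities in its shared embedding space reproduce the teacher's score distribution; retrieval then reduces to an efficient kNN search over precomputed embeddings. All function names, dimensions, and hyper-parameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_img_emb, student_txt_emb, teacher_scores, temperature=0.05):
    """KL divergence between the teacher's fine-grained alignment scores and the
    student's dot-product similarities over a batch (image-to-text direction)."""
    # Student similarities in the shared embedding space: (batch_img, batch_txt)
    student_scores = (
        F.normalize(student_img_emb, dim=-1) @ F.normalize(student_txt_emb, dim=-1).T
    )
    # Match the two score distributions row-wise.
    log_p_student = F.log_softmax(student_scores / temperature, dim=-1)
    p_teacher = F.softmax(teacher_scores / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")


# Toy usage: 8 image-text pairs, 256-d student embeddings, and teacher scores
# standing in for the output of a fine-grained alignment head.
img_emb = torch.randn(8, 256, requires_grad=True)
txt_emb = torch.randn(8, 256, requires_grad=True)
teacher = torch.randn(8, 8)
loss = distillation_loss(img_emb, txt_emb, teacher)
loss.backward()

# At inference time, retrieval is a kNN search over the indexed embeddings.
gallery = F.normalize(torch.randn(10_000, 256), dim=-1)   # precomputed image embeddings
query = F.normalize(torch.randn(1, 256), dim=-1)          # one text query
top10 = (query @ gallery.T).topk(k=10).indices            # indices of the 10 nearest items
```

Because the student only needs a single dot product per candidate at query time, this kind of distilled bi-encoder is what makes the large speed-up over cross-attentive VL Transformers possible while retaining most of their ranking quality.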