Relation-focused cross-modal information retrieval aims to retrieve information based on the relations expressed in user queries, and it is particularly important for information retrieval applications and next-generation search engines. While pre-trained networks such as Contrastive Language-Image Pre-training (CLIP) have achieved state-of-the-art performance in cross-modal learning tasks, the Vision Transformer (ViT) used in these networks is limited in its ability to focus on relations among image regions. Specifically, ViT is trained to match images with relevant descriptions at the global level, without considering the alignment between image regions and descriptions. This paper introduces VITR, a novel network that enhances ViT by extracting and reasoning about image region relations with a Local encoder. VITR comprises two main components: (1) extending the capabilities of ViT-based cross-modal networks to extract and reason with region relations in images; and (2) aggregating the reasoned results with global knowledge to predict the similarity scores between images and descriptions. The proposed network was evaluated on relation-focused cross-modal information retrieval tasks using the Flickr30K, RefCOCOg, and CLEVR datasets. The results show that VITR outperformed state-of-the-art networks, including CLIP, VSE$\infty$, and VSRN++, on both image-to-text and text-to-image retrieval tasks.
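The aggregation of locally reasoned results with global knowledge could, as a minimal sketch, be illustrated by a convex combination of the two similarity scores. The function name, the weighting scheme, and the toy scores below are illustrative assumptions, not the paper's actual fusion mechanism:

```python
import numpy as np

def fuse_similarity(local_sim, global_sim, alpha=0.5):
    """Hypothetical fusion of region-relation (local) and global
    image-description similarity scores via a convex combination.
    alpha weights the local (relation-reasoned) score."""
    local_sim = np.asarray(local_sim, dtype=float)
    global_sim = np.asarray(global_sim, dtype=float)
    return alpha * local_sim + (1.0 - alpha) * global_sim

# Toy scores for three image-description pairs.
local = [0.8, 0.2, 0.5]
glob = [0.6, 0.4, 0.5]
print(fuse_similarity(local, glob, alpha=0.5))  # [0.7 0.3 0.5]
```

Ranking candidates by the fused score lets relation cues from image regions adjust the global match produced by a CLIP-style encoder.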