Recently, cross-modal pre-training has become a research hotspot because of its wide application in downstream tasks such as retrieval, captioning, and question answering. However, existing methods adopt a single-stream pre-training model to learn a unified vision-language representation for cross-modal retrieval, which easily suffers from prohibitive computational cost. Moreover, although conventional double-stream structures are quite efficient, they still lack the vital cross-modal interactions, resulting in inferior performance. Motivated by these challenges, we put forward a Contrastive Cross-Modal Knowledge Sharing Pre-training method (COOKIE) to learn joint text-image representations. Structurally, COOKIE adopts the traditional double-stream structure because of its acceptable time consumption. To overcome the inherent defects of the double-stream structure mentioned above, we elaborately design two effective modules. Concretely, the first module is a weight-sharing transformer built on top of the visual and textual encoders, aiming to semantically align text and image. This design enables the visual and textual paths to focus on the same semantics. The second module comprises three specially designed contrastive learning objectives, aiming to share knowledge between different models. The shared cross-modal knowledge greatly improves the learning of unimodal representations, thereby promoting single-modal retrieval tasks. Extensive experiments on multi-modal matching tasks, including cross-modal retrieval, text matching, and image retrieval, demonstrate the superiority of our pre-training model in both computational efficiency and retrieval performance.
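The two modules described above can be illustrated with a minimal numpy sketch. All names, dimensions, and the InfoNCE-style symmetric loss below are illustrative assumptions, not the paper's actual implementation: the key ideas shown are (a) one shared projection applied to both modalities' encoder outputs, and (b) a contrastive objective over aligned image-text pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: each modality-specific encoder emits d_enc-dim features,
# which ONE shared weight matrix (the "weight-sharing" head from the abstract)
# projects into a joint d_joint-dim space.
d_enc, d_joint, batch = 8, 4, 3

W_shared = rng.normal(size=(d_enc, d_joint))  # shared by both modalities

def encode(features, W):
    """Project encoder features into the joint space and L2-normalize."""
    z = features @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(z_a, z_b, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss over aligned pairs."""
    logits = z_a @ z_b.T / temperature        # pairwise cosine similarities
    labels = np.arange(len(z_a))              # matching pairs on the diagonal

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Stand-ins for the visual and textual encoder outputs of 3 aligned pairs.
img_feat = rng.normal(size=(batch, d_enc))
txt_feat = rng.normal(size=(batch, d_enc))

z_img = encode(img_feat, W_shared)  # same W_shared for both paths
z_txt = encode(txt_feat, W_shared)
loss = info_nce(z_img, z_txt)
```

In a real double-stream model, `W_shared` would be a transformer head rather than a single matrix, and minimizing the contrastive loss pulls matching image-text pairs together in the joint space while pushing mismatched pairs apart.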