Pre-training vision-language models with contrastive objectives has shown promising results that are both scalable to large uncurated datasets and transferable to many downstream applications. Several follow-up works have aimed to improve data efficiency by adding self-supervision terms, but the inter-domain (image-text) contrastive loss and the intra-domain (image-image) contrastive loss are defined in separate embedding spaces in those works, so many feasible combinations of supervision are overlooked. To overcome this issue, we propose UniCLIP, a Unified framework for Contrastive Language-Image Pre-training. UniCLIP integrates the contrastive losses of both inter-domain pairs and intra-domain pairs into a single universal space. The discrepancies that arise when integrating contrastive losses across different domains are resolved by the three key components of UniCLIP: (1) augmentation-aware feature embedding, (2) MP-NCE loss, and (3) domain-dependent similarity measure. UniCLIP outperforms previous vision-language pre-training methods on various single- and multi-modality downstream tasks. In our experiments, we show that each component of UniCLIP contributes to the final performance.
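To give a rough sense of what placing inter-domain and intra-domain pairs in "a single universal space" means, the following is a minimal, illustrative PyTorch sketch of an InfoNCE-style loss computed over two augmented image views and the paired text embeddings in one shared space, so that image-image and image-text positives contrast against the same pool of negatives. The function name, fixed temperature, and loss form are assumptions for illustration only; this does not reproduce UniCLIP's MP-NCE loss, augmentation-aware feature embedding, or domain-dependent similarity measure.

```python
# Illustrative sketch, not the official UniCLIP implementation.
import torch
import torch.nn.functional as F


def unified_contrastive_loss(img_a, img_b, txt, temperature=0.07):
    """img_a, img_b: two augmented views of the images, shape (N, D).
    txt: embeddings of the paired captions, shape (N, D)."""
    n = img_a.shape[0]
    # Place all embeddings in one shared space and L2-normalize them.
    feats = F.normalize(torch.cat([img_a, img_b, txt], dim=0), dim=-1)  # (3N, D)
    sim = feats @ feats.t() / temperature                               # (3N, 3N)
    sim.fill_diagonal_(float('-inf'))                                   # exclude self-similarity
    log_prob = F.log_softmax(sim, dim=-1)

    # For the i-th anchor, img_b[i] is an intra-domain positive and txt[i] an
    # inter-domain positive; all other rows serve as shared negatives.
    idx = torch.arange(n)
    pairs = [(idx, idx + n),          # img_a vs. img_b (intra-domain)
             (idx, idx + 2 * n),      # img_a vs. txt   (inter-domain)
             (idx + n, idx + 2 * n)]  # img_b vs. txt   (inter-domain)
    loss = 0.0
    for a, p in pairs:
        loss = loss - (log_prob[a, p].mean() + log_prob[p, a].mean()) / 2
    return loss / len(pairs)


if __name__ == "__main__":
    torch.manual_seed(0)
    img_a, img_b, txt = (torch.randn(8, 128) for _ in range(3))
    print(unified_contrastive_loss(img_a, img_b, txt).item())
```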