In this study, we propose an enhancement to the similarity computation mechanism in multi-modal contrastive pretraining frameworks such as CLIP. Prior theoretical work has shown that the optimal similarity metric between paired modalities corresponds to the pointwise mutual information (PMI) between the two modalities. However, current implementations of CLIP and its variants do not fully exploit the underlying linear structure of PMI. We therefore propose KME-CLIP, which leverages this structure through an inner product in a reproducing kernel Hilbert space. We theoretically prove that our method can approximate PMI with arbitrary accuracy and empirically demonstrate that it outperforms the standard CLIP formulation overall across several retrieval and classification tasks.
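As a minimal sketch of the quantity the abstract refers to (the notation below, including $p$, $r$, $\phi$, $\psi$, $f$, $g$, and $\tau$, is our own illustrative shorthand and is not defined in the abstract): the PMI of a paired observation $(x, y)$ and its exponentiated form are

$$\mathrm{PMI}(x, y) \;=\; \log \frac{p(x, y)}{p(x)\, p(y)}, \qquad r(x, y) \;=\; \frac{p(x, y)}{p(x)\, p(y)},$$

where, roughly, the "linear structure" is the possibility of expressing the density ratio as an inner product $r(x, y) \approx \langle \phi(x), \psi(y) \rangle_{\mathcal{H}}$ in a reproducing kernel Hilbert space, whereas standard CLIP scores a pair with a scaled cosine similarity $f(x)^{\top} g(y) / \tau$ of learned unit-norm embeddings. This is only a heuristic reading under our assumed notation, not the paper's formal construction.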