Contrastive learning is a powerful technique for learning representations that are semantically distinctive and geometrically invariant. While most earlier approaches demonstrated its effectiveness on single-modality tasks such as image classification, recent work has begun extending the idea to multi-modal data. In this paper, we propose two loss functions based on normalized cross-entropy for learning a joint visual-semantic embedding with batch contrastive training. Within a batch, for a given anchor from one modality, we draw negatives only from the other modality, and define our first contrastive loss from the expected violation incurred across all of these negatives. We then modify this loss to obtain a second contrastive loss based on the violation incurred by the hardest negative alone. We compare our approach with existing visual-semantic embedding methods on cross-modal image-to-text and text-to-image retrieval using the MS-COCO and Flickr30K datasets, outperforming the state of the art on MS-COCO and achieving comparable results on Flickr30K.
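As a rough illustration of the two losses described above (a minimal sketch, not the paper's exact formulation), the following PyTorch code contrasts an all-negatives normalized cross-entropy with a hardest-negative variant. The cosine-similarity scores, the temperature value tau=0.07, the symmetric averaging over both retrieval directions, and the function names are all assumptions introduced here for illustration.

```python
import torch
import torch.nn.functional as F


def contrastive_all_negatives(img_emb, txt_emb, tau=0.07):
    """Normalized cross-entropy over ALL cross-modal negatives in the batch.

    img_emb, txt_emb: (B, D) embeddings; row i of each modality forms the
    positive pair, and every other row of the *other* modality is a negative.
    """
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / tau                      # (B, B) cross-modal similarities
    targets = torch.arange(img.size(0), device=img.device)
    # Average the image-to-text and text-to-image directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def contrastive_hardest_negative(img_emb, txt_emb, tau=0.07):
    """Same normalized cross-entropy, but each anchor is contrasted only
    against its single hardest (most similar) cross-modal negative."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    sims = img @ txt.t() / tau
    pos = sims.diagonal()                             # (B,) positive-pair scores
    diag = torch.eye(sims.size(0), dtype=torch.bool, device=sims.device)
    neg = sims.masked_fill(diag, float('-inf'))       # mask positives out
    hard_i2t = neg.max(dim=1).values                  # hardest text per image
    hard_t2i = neg.max(dim=0).values                  # hardest image per text
    # -log sigmoid(pos - hard) == -log( e^pos / (e^pos + e^hard) ),
    # i.e. a two-term softmax over {positive, hardest negative}.
    loss_i2t = -F.logsigmoid(pos - hard_i2t)
    loss_t2i = -F.logsigmoid(pos - hard_t2i)
    return 0.5 * (loss_i2t.mean() + loss_t2i.mean())
```

Both functions take a batch of paired image and text embeddings and return a scalar loss, so either can be dropped into a standard training loop; the first pools evidence from every in-batch negative, while the second concentrates the gradient on the most violating one.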