Recent methods for deep metric learning have focused on designing different contrastive loss functions between positive and negative pairs of samples, so that the learned feature embedding pulls positive samples of the same class closer together and pushes negative samples from different classes away from each other. In this work, we recognize that there is a significant semantic gap between features at the intermediate feature layer and class labels at the final output layer. To bridge this gap, we develop a contrastive Bayesian analysis that characterizes and models the posterior probabilities of image labels conditioned on their feature similarity in a contrastive learning setting. This contrastive Bayesian analysis leads to a new loss function for deep metric learning. To improve the generalization capability of the proposed method to new classes, we further extend the contrastive Bayesian loss with a metric variance constraint. Our experimental results and ablation studies demonstrate that the proposed contrastive Bayesian metric learning method significantly improves the performance of deep metric learning in both supervised and pseudo-supervised scenarios, outperforming existing methods by a large margin.
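To make the pull/push mechanism and the idea of modeling label-agreement posteriors from feature similarity concrete, here is a minimal PyTorch sketch of a generic pairwise loss that treats "two samples share a class label" as a Bernoulli variable whose posterior is a logistic function of cosine similarity. The function name, the temperature parameter, and the logistic form of the posterior are illustrative assumptions; this is not the paper's exact contrastive Bayesian loss or its metric variance constraint.

```python
import torch
import torch.nn.functional as F

def pairwise_posterior_loss(embeddings, labels, temperature=0.1):
    """Generic sketch: model P(same label | feature similarity) with a
    sigmoid over scaled cosine similarity, then train with binary
    cross-entropy over all sample pairs. Illustrative only; not the
    paper's contrastive Bayesian loss."""
    # L2-normalize so inner products are cosine similarities in [-1, 1].
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                        # pairwise similarity logits
    same = (labels[:, None] == labels[None, :]).float()  # 1 for positive pairs, 0 otherwise
    # Exclude trivial self-pairs on the diagonal.
    mask = ~torch.eye(len(labels), dtype=torch.bool, device=z.device)
    # BCE between sigmoid(similarity) and pair labels: gradients pull
    # positive pairs toward high similarity and push negative pairs apart.
    return F.binary_cross_entropy_with_logits(sim[mask], same[mask])

# Hypothetical usage with a random batch of 8 embeddings and 4 classes:
emb = torch.randn(8, 128, requires_grad=True)
y = torch.randint(0, 4, (8,))
pairwise_posterior_loss(emb, y).backward()
```

The key design point this sketch illustrates is that the loss is defined on posterior probabilities of label agreement rather than directly on distances, which is the general direction the abstract describes for bridging features and class labels.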