Deep metric learning techniques have been used for visual representation in various supervised and unsupervised learning tasks by learning embeddings of samples with deep networks. However, classic approaches, which employ a fixed distance metric as the similarity function between two embeddings, may lead to suboptimal performance in capturing complex data distributions. The Bregman divergence generalizes a broad family of distance measures and arises throughout many areas of deep metric learning. In this paper, we first show how deep metric learning losses can arise from the Bregman divergence. We then introduce a novel method for learning an empirical Bregman divergence directly from data, based on parameterizing the convex function underlying the Bregman divergence with a deep neural network. We further show experimentally that our approach performs effectively on five popular public datasets compared with other state-of-the-art (SOTA) deep metric learning methods, particularly for pattern recognition problems.
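For reference, the Bregman divergence generated by a differentiable, strictly convex function $\phi$ is

$$
D_\phi(x, y) \;=\; \phi(x) \;-\; \phi(y) \;-\; \langle \nabla \phi(y),\, x - y \rangle,
$$

which recovers the squared Euclidean distance for $\phi(x) = \lVert x \rVert^2$ and the (generalized) KL divergence for the negative entropy $\phi(x) = \sum_i x_i \log x_i$; a learned $\phi$ therefore subsumes these fixed metrics as special cases.

As a minimal sketch of the parameterization idea (not necessarily the paper's exact architecture), $\phi$ can be modeled with an input-convex neural network in the style of Amos et al., whose non-negative hidden-path weights and convex, non-decreasing activations guarantee convexity in the input; the divergence is then evaluated via automatic differentiation. All module and function names below are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvexPhi(nn.Module):
    """Input-convex network for the convex function phi (one standard
    construction; the paper's exact parameterization may differ)."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.Wx0 = nn.Linear(dim, hidden)                 # first layer: unconstrained
        self.Wz1 = nn.Linear(hidden, hidden, bias=False)  # z-path: clamped >= 0 in forward
        self.Wx1 = nn.Linear(dim, hidden)                 # skip connection from the input
        self.Wz2 = nn.Linear(hidden, 1, bias=False)
        self.Wx2 = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softplus is convex and non-decreasing; with non-negative z-path
        # weights, the composition stays convex in x.
        z = F.softplus(self.Wx0(x))
        z = F.softplus(F.linear(z, self.Wz1.weight.clamp(min=0)) + self.Wx1(x))
        return F.linear(z, self.Wz2.weight.clamp(min=0)) + self.Wx2(x)

def bregman_divergence(phi: nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    y = y.requires_grad_(True)  # gradient of phi at y is needed
    phi_y = phi(y)
    (grad_y,) = torch.autograd.grad(phi_y.sum(), y, create_graph=True)
    return phi(x) - phi_y - ((x - y) * grad_y).sum(dim=-1, keepdim=True)

# Usage: divergences between two batches of embeddings.
phi = ConvexPhi(dim=128)
x, y = torch.randn(32, 128), torch.randn(32, 128)
d = bregman_divergence(phi, x, y)  # shape (32, 1); non-negative up to
                                   # numerical error since phi is convex
```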