Deep metric learning techniques have been applied to visual representation in various supervised and unsupervised learning tasks by learning embeddings of samples with deep networks. However, classic approaches, which employ a fixed distance metric as the similarity function between two embeddings, may fail to capture complex data distributions and thus yield suboptimal performance. The Bregman divergence generalizes a wide range of distance measures and arises throughout many fields of deep metric learning. In this paper, we first show how deep metric learning losses can arise from the Bregman divergence. We then introduce a novel method for learning an empirical Bregman divergence directly from data by parameterizing the convex function underlying the Bregman divergence with a deep neural network. We further show experimentally that our approach performs effectively on five popular public datasets compared with other state-of-the-art deep metric learning methods, particularly on pattern recognition problems.
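To make the generalization concrete, recall the standard definition: for a strictly convex, differentiable function \(\phi\), the Bregman divergence between \(x\) and \(y\) is

\[
D_\phi(x, y) \;=\; \phi(x) \;-\; \phi(y) \;-\; \langle \nabla \phi(y),\, x - y \rangle ,
\]

and familiar measures drop out as special cases: \(\phi(x) = \lVert x \rVert_2^2\) gives \(D_\phi(x, y) = \lVert x \rVert_2^2 - \lVert y \rVert_2^2 - \langle 2y,\, x - y \rangle = \lVert x - y \rVert_2^2\), the squared Euclidean distance, while the negative entropy \(\phi(p) = \sum_i p_i \log p_i\) yields the KL divergence.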
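As a rough illustration of the parameterization idea (a minimal sketch, not the paper's architecture), the code below represents the convex function \(\phi\) with a small input-convex neural network (ICNN) and evaluates the induced Bregman divergence with autograd; the module `ConvexPhi`, its layer sizes, and the helper `bregman_divergence` are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvexPhi(nn.Module):
    """Small input-convex network (ICNN): the output is convex in the input
    because the z-path weights are kept nonnegative and softplus is convex
    and nondecreasing. Serves as the generating function phi."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.x0 = nn.Linear(dim, hidden)
        self.z1 = nn.Linear(hidden, hidden, bias=False)  # constrained >= 0 in forward
        self.x1 = nn.Linear(dim, hidden)
        self.z2 = nn.Linear(hidden, 1, bias=False)       # constrained >= 0 in forward
        self.x2 = nn.Linear(dim, 1)

    def forward(self, x):
        z = F.softplus(self.x0(x))
        z = F.softplus(F.linear(z, self.z1.weight.abs()) + self.x1(x))
        return F.linear(z, self.z2.weight.abs()) + self.x2(x)  # shape (B, 1)

def bregman_divergence(phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>,
    nonnegative whenever phi is convex. y is assumed to be a leaf tensor."""
    y = y.requires_grad_(True)
    phi_y = phi(y)
    # Per-sample gradient of phi at y; create_graph=True keeps the result
    # differentiable so the divergence can sit inside a metric learning loss.
    grad_y = torch.autograd.grad(phi_y.sum(), y, create_graph=True)[0]
    return (phi(x) - phi_y).squeeze(-1) - ((x - y) * grad_y).sum(dim=-1)

phi = ConvexPhi(dim=16)
x, y = torch.randn(8, 16), torch.randn(8, 16)
print(bregman_divergence(phi, x, y).min())  # >= 0 up to numerical error
```

In a full training loop, `x` and `y` would be embeddings produced by a backbone network, and the learned divergence would replace the fixed distance metric inside a standard deep metric learning loss.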