Deep Metric Learning algorithms aim to learn an efficient embedding space that preserves the similarity relationships among the input data. While these algorithms have achieved significant performance gains across a wide range of tasks, they have failed to consider and enforce comprehensive similarity constraints, and thus learn a sub-optimal metric in the embedding space. Moreover, until now there have been few studies of their performance in the presence of noisy labels. Here, we address the problem of learning a discriminative deep embedding space by designing a novel, yet effective, Deep Class-wise Discrepancy Loss (DCDL) function that segregates the underlying similarity distributions of the embedding points between every pair of classes, thereby introducing class-wise discrepancy. Our empirical results on three standard image classification datasets and two fine-grained image recognition datasets, both with and without label noise, clearly demonstrate the need to incorporate such class-wise similarity relationships alongside traditional algorithms when learning a discriminative embedding space.
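The abstract does not give the exact formulation of DCDL, so the following PyTorch sketch is only one plausible instantiation of the stated idea: it separates within-class from between-class similarity distributions for every pair of classes. The function name, the margin hyperparameter, and the use of distribution means as summary statistics are all assumptions, not the authors' definition.

```python
# Hypothetical sketch of a class-wise discrepancy loss (NOT the authors'
# exact DCDL formulation, which the abstract does not specify).
import torch
import torch.nn.functional as F

def class_wise_discrepancy_loss(embeddings, labels, margin=0.5):
    """Push the distribution of within-class similarities away from the
    distribution of between-class similarities, for every pair of classes.

    embeddings: (N, D) tensor of embedding points
    labels:     (N,) tensor of integer class labels
    margin:     assumed minimum separation between the distribution means
    """
    z = F.normalize(embeddings, dim=1)   # unit-norm embeddings
    sim = z @ z.t()                      # (N, N) cosine similarity matrix
    classes = labels.unique()
    loss = embeddings.new_tensor(0.0)
    n_pairs = 0
    for i, ci in enumerate(classes):
        mask_i = labels == ci
        n_i = mask_i.sum()
        # Mean similarity among points of class ci, excluding the
        # diagonal self-similarities (each equal to 1 after normalization).
        intra = sim[mask_i][:, mask_i]
        intra_mean = (intra.sum() - n_i) / (n_i * (n_i - 1)).clamp(min=1)
        for cj in classes[i + 1:]:
            mask_j = labels == cj
            inter_mean = sim[mask_i][:, mask_j].mean()
            # Penalize overlap between the two distributions: within-class
            # similarity should exceed between-class similarity by `margin`.
            loss = loss + F.relu(margin - (intra_mean - inter_mean))
            n_pairs += 1
    return loss / max(n_pairs, 1)
```

In a training loop, a loss of this kind would typically be added to a standard metric learning objective (e.g., a triplet or contrastive loss), in line with the abstract's point that class-wise similarity constraints should complement, not replace, traditional algorithms.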