We propose a novel distance-based regularization method for deep metric learning called Multi-level Distance Regularization (MDR). MDR explicitly disturbs the learning procedure by regularizing pairwise distances between embedding vectors into multiple levels that represent degrees of similarity between a pair. During training, the model is optimized with both MDR and an existing deep metric learning loss simultaneously; the two losses interfere with each other's objectives, which makes the learning problem harder. Moreover, MDR prevents individual examples from being ignored or from overly influencing the learning process. Together, these effects allow the parameters of the embedding network to settle on a local optimum with better generalization. Without bells and whistles, MDR combined with a simple Triplet loss achieves state-of-the-art performance on various benchmark datasets: CUB-200-2011, Cars-196, Stanford Online Products, and In-Shop Clothes Retrieval. We perform extensive ablation studies on its behavior to show the effectiveness of MDR. By easily adopting MDR, previous approaches can be improved in both performance and generalization ability.
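To make the idea concrete, below is a minimal PyTorch sketch of a multi-level distance regularizer of the kind the abstract describes: pairwise distances within a mini-batch are pulled toward a small set of levels, and the result is added to an existing metric learning loss. The number of levels, their initialization, the batch-wise distance normalization, and the weight `lam` are all illustrative assumptions, not details given in the abstract.

```python
import torch
import torch.nn as nn


class MDRLoss(nn.Module):
    """Sketch of a multi-level distance regularizer.

    Regularizes pairwise embedding distances toward the nearest of a
    small set of learnable levels. Level count, initialization, and the
    normalization scheme below are assumptions for illustration.
    """

    def __init__(self, init_levels=(-1.0, 0.0, 1.0)):
        super().__init__()
        # Learnable scalar levels that discretize the normalized distance range.
        self.levels = nn.Parameter(torch.tensor(init_levels))

    def forward(self, embeddings):
        # Pairwise Euclidean distances within the mini-batch.
        dist = torch.cdist(embeddings, embeddings, p=2)
        iu = torch.triu_indices(dist.size(0), dist.size(1), offset=1)
        d = dist[iu[0], iu[1]]
        # Normalize distances so the levels are scale-invariant (assumption).
        d_hat = (d - d.mean()) / (d.std() + 1e-8)
        # Penalize each distance's gap to its nearest level.
        gaps = (d_hat.unsqueeze(1) - self.levels.unsqueeze(0)).abs()
        return gaps.min(dim=1).values.mean()


# Hypothetical usage alongside an existing metric learning loss:
#   mdr = MDRLoss()
#   loss = triplet_loss(embeddings, labels) + lam * mdr(embeddings)
# where triplet_loss and the weight lam stand in for whatever loss and
# balancing factor a given training setup uses.
```

Because the regularizer acts on all pairwise distances in the batch rather than only on mined triplets, every example contributes a gradient, which is consistent with the abstract's claim that MDR keeps examples from being ignored or overly influential.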