With the development of deep learning, Deep Metric Learning (DML) has achieved great improvements in face recognition. However, the softmax loss widely used during training often brings large intra-class variations, while feature normalization is exploited only at test time to compute pair similarities. To bridge this gap, we constrain the intra-class cosine similarity between the features and the weight vectors in the softmax loss to be larger than a margin during training, and extend this idea in four directions. First, we explore the effect of a hard sample mining strategy. Second, to reduce the manual effort of tuning the margin hyper-parameter, we propose a self-adaptive margin updating strategy. Third, we introduce a normalized version that takes full advantage of the cosine similarity constraint. Finally, we strengthen the constraint by forcing the intra-class cosine similarity to exceed the mean inter-class cosine similarity by a margin in the exponential feature projection space. Extensive experiments on the Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and IARPA Janus Benchmark A (IJB-A) datasets demonstrate that the proposed methods outperform mainstream DML methods and approach state-of-the-art performance.
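To make the core idea concrete, the following is a minimal PyTorch sketch of a softmax loss with an additive margin on the intra-class cosine similarity, in the spirit of the normalized variant described above. The scale `s` and margin `m` are illustrative hyper-parameters, not values taken from the paper, and the class name is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineMarginSoftmax(nn.Module):
    """Sketch of a softmax loss with a margin on the intra-class cosine
    similarity between features and class weight vectors. `s` and `m`
    are assumed hyper-parameters, not values from the paper."""

    def __init__(self, feat_dim: int, num_classes: int,
                 s: float = 30.0, m: float = 0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s = s  # scale applied after normalization
        self.m = m  # margin subtracted from the target-class cosine

    def forward(self, features: torch.Tensor,
                labels: torch.Tensor) -> torch.Tensor:
        # Normalize features and weights so logits are cosine similarities.
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        # Subtract the margin from the ground-truth class only, which forces
        # the intra-class cosine similarity to exceed the inter-class ones
        # by at least m before the example is classified correctly.
        onehot = F.one_hot(labels, cos.size(1)).to(cos.dtype)
        logits = self.s * (cos - self.m * onehot)
        return F.cross_entropy(logits, labels)

# Usage (hypothetical dimensions):
#   loss_fn = CosineMarginSoftmax(feat_dim=512, num_classes=10572)
#   loss = loss_fn(embeddings, labels)
```

Normalizing both features and weights means the training objective operates on the same cosine similarities used at test time, which is exactly the train/test gap the abstract points out.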