Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, several loss functions such as center loss, large margin softmax loss, and angular softmax loss have recently been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by $L_2$ normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, YouTube Faces (YTF), and Labeled Faces in the Wild (LFW). We achieve state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach.
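To make the formulation above concrete, the following is a minimal NumPy sketch of the large margin cosine loss (LMCL) as described: both the features and the classifier weight vectors are $L_2$-normalized so that the logits become cosine similarities, a cosine margin $m$ is subtracted from the target-class cosine, and the result is re-scaled by a factor $s$ before applying cross-entropy. The hyper-parameter values and the helper name `lmcl_loss` are illustrative assumptions, not the paper's reference implementation or settings.

```python
# Minimal sketch of the large margin cosine loss (LMCL), assuming the
# common notation s (scaling factor) and m (cosine margin).
import numpy as np

def lmcl_loss(features, weights, labels, s=30.0, m=0.35):
    """Compute LMCL for a batch.

    features: (N, d) raw feature vectors from the CNN.
    weights:  (d, C) classifier weight matrix, one column per class.
    labels:   (N,) integer class labels.
    """
    # L2-normalize both features and class weight vectors to remove radial
    # variation, so the logits are pure cosine similarities.
    x = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos_theta = x @ w                      # (N, C) cosine similarities

    # Subtract the cosine margin m from the target-class cosine only,
    # enlarging the decision margin in the angular (cosine) space.
    n = features.shape[0]
    logits = cos_theta.copy()
    logits[np.arange(n), labels] -= m
    logits *= s                            # re-scale so the softmax is well-conditioned

    # Standard cross-entropy over the margin-adjusted, scaled cosines.
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(n), labels].mean()

# Example usage with random data (shapes only; purely illustrative).
rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 512))
W = rng.normal(size=(512, 10))
y = rng.integers(0, 10, size=8)
print(lmcl_loss(feat, W, y))
```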