Face recognition has made extraordinary progress owing to advances in deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, is face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks discriminative power. To address this problem, several loss functions have recently been proposed, such as center loss \cite{centerloss}, large margin softmax loss \cite{lsoftmax}, and angular softmax loss \cite{sphereface}. All these improved losses share the same idea: maximizing inter-class variance while minimizing intra-class variance. In this paper, we design a novel loss function, the large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by L2 normalizing both the features and the weight vectors to remove radial variation, and on this basis introduce a cosine margin term \emph{$m$} to further maximize the decision margin in angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of the normalization and the cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. To evaluate our approach, we conduct extensive experiments on the most popular public-domain face recognition benchmarks, including the MegaFace Challenge, YouTube Faces (YTF), and Labeled Faces in the Wild (LFW). We achieve state-of-the-art performance on these benchmarks, which confirms the effectiveness of our approach.
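The normalize-then-margin formulation above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; the scale factor \texttt{s} and margin \texttt{m} values below are illustrative assumptions, and the gradient computation needed for training is omitted.

```python
import numpy as np

def lmcl_loss(features, weights, labels, s=64.0, m=0.35):
    """Sketch of the large margin cosine loss (LMCL).

    features: (N, d) raw feature vectors from the CNN
    weights:  (d, C) class weight vectors
    labels:   (N,)  integer class labels
    s, m:     scale and cosine margin (illustrative values)
    """
    # L2-normalize features and class weights to remove radial variation,
    # so the logits depend only on the cosine of the angle to each class.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ w                                   # (N, C) cosine similarities
    # Subtract the margin m from the target-class cosine only,
    # which pushes the decision boundary into the target class.
    cos[np.arange(len(labels)), labels] -= m
    logits = s * cos
    # Numerically stable softmax cross-entropy.
    logits -= logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()
```

Because the margin is subtracted only from the target-class cosine, the loss with $m > 0$ is strictly larger than the plain normalized-softmax loss on the same inputs, forcing the learned features toward a larger angular gap between classes.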