Softmax-based loss functions and their variants (e.g., CosFace, SphereFace, and ArcFace) significantly improve face recognition performance in unconstrained, in-the-wild scenes. A common practice in these algorithms is to optimize the multiplication between the embedding features and the linear transformation matrix. In most cases, however, the dimension of the embedding features is chosen from conventional design experience, and little work has studied how to improve performance using the features themselves once that dimension is fixed. To address this challenge, this paper presents a softmax approximation method called SubFace, which exploits subspace features to boost face recognition performance. Specifically, we dynamically select non-overlapping subspace features in each batch during training and use them to approximate the full features in the softmax-based loss, which significantly enhances the discriminability of the deep model for face recognition. Comprehensive experiments on benchmark datasets demonstrate that our method markedly improves the performance of a vanilla CNN baseline, strongly supporting the effectiveness of the subspace strategy with margin-based losses.
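The core idea above can be sketched in a few lines: per batch, select a random subset of the embedding dimensions and compute cosine logits using only that subspace. This is a minimal illustrative sketch, not the authors' implementation; the function name `subface_logits`, the shapes, and the choice of NumPy are all assumptions for exposition.

```python
import numpy as np

def subface_logits(features, weights, sub_dim, rng):
    """Hypothetical SubFace-style step: pick a random subset of feature
    dimensions for this batch, then compute cosine-similarity logits
    using only that subspace (non-overlapping subspaces would come from
    splitting the same permutation into disjoint chunks)."""
    d = features.shape[1]
    idx = rng.permutation(d)[:sub_dim]          # subspace index set
    f = features[:, idx]                        # (batch, sub_dim)
    w = weights[:, idx]                         # (classes, sub_dim)
    # L2-normalize so logits are cosine similarities, as margin-based
    # losses (e.g., ArcFace) operate on normalized features and weights.
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    w = w / np.linalg.norm(w, axis=1, keepdims=True)
    return f @ w.T                              # (batch, classes), in [-1, 1]

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 512))           # batch of 4 embeddings
W = rng.standard_normal((10, 512))              # one weight vector per identity
logits = subface_logits(feats, W, sub_dim=128, rng=rng)
print(logits.shape)  # (4, 10)
```

These subspace logits would then feed a margin-based softmax loss in place of the full-feature logits during training.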