In this paper, we propose a conceptually simple and geometrically interpretable objective function, the additive margin Softmax (AM-Softmax), for deep face verification. In general, the face verification task can be viewed as a metric learning problem, so learning large-margin face features whose intra-class variation is small and whose inter-class difference is large is of great importance for achieving good performance. Recently, Large-margin Softmax and Angular Softmax have been proposed to incorporate an angular margin in a multiplicative manner. In this work, we introduce a novel additive angular margin for the Softmax loss, which is intuitively appealing and more interpretable than existing works. We also emphasize and discuss the importance of feature normalization. Most importantly, our experiments on the LFW BLUFR protocol and MegaFace show that our additive margin Softmax loss consistently outperforms current state-of-the-art methods using the same network architecture and training dataset. Our code is available at https://github.com/happynear/AMSoftmax
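As a brief sketch of how the additive margin enters the loss (assuming L2-normalized features $f_i$ and class weights $W_j$, so that $W_j^T f_i = \cos\theta_j$, with scale $s$ and margin $m$ as hyperparameters), the objective takes the form

\[
\mathcal{L}_{\mathrm{AMS}} \;=\; -\frac{1}{n}\sum_{i=1}^{n}\log\frac{e^{\,s\,(\cos\theta_{y_i}-m)}}{e^{\,s\,(\cos\theta_{y_i}-m)}+\sum_{j=1,\,j\neq y_i}^{c} e^{\,s\,\cos\theta_j}},
\]

i.e., the margin $m$ is subtracted from the target-class cosine similarity before scaling, rather than multiplying the angle itself as in Large-margin Softmax and Angular Softmax.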