The training scheme of deep face recognition has greatly evolved in recent years, yet it encounters new challenges in the large-scale data regime, where massive and diverse hard cases occur. Especially in the range of low false accept rate (FAR), there are various hard cases among both positives (intra-class) and negatives (inter-class). In this paper, we study how to make better use of these hard samples to improve training. Existing works approach this with margin-based formulations applied to either the positive logit or the negative logits. However, the correlation between hard positives and hard negatives is overlooked, and so is the relation between the margins in the positive and negative logits. We find that this correlation is significant, especially on large-scale datasets, and that one can take advantage of it to boost training by relating the positive and negative margins for each training sample. To this end, we propose an explicit sample-wise collaboration between the positive and negative margins. Given a batch of hard samples, a novel Negative-Positive Collaboration loss, named NPCFace, is formulated, which emphasizes training on both hard negative and hard positive cases via a collaborative-margin mechanism in the softmax logits, and also brings a better interpretation of the negative-positive hardness correlation. In addition, the emphasis is implemented with an improved formulation to achieve stable convergence and flexible parameter settings. We validate the effectiveness of our approach on various large-scale face recognition benchmarks and obtain advantageous results, especially in the low-FAR range.
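To make the collaborative-margin idea concrete, below is a minimal PyTorch sketch of a margin-based softmax loss in which a sample-wise hardness signal (derived from hard negatives) enlarges both the positive and negative margins together. The class name `CollaborativeMarginLoss`, the hard-negative criterion, the linear coupling term `coupling`, and all hyperparameter values are illustrative assumptions for exposition, not the exact NPCFace formulation from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CollaborativeMarginLoss(nn.Module):
    """Illustrative sketch: positive and negative margins are coupled
    through a per-sample hardness term. This is a simplified assumption,
    not the exact NPCFace loss."""

    def __init__(self, num_classes, feat_dim, scale=64.0,
                 base_pos_margin=0.4, base_neg_margin=0.0, coupling=0.2):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s = scale
        self.m_p0 = base_pos_margin  # base additive angular margin (positive logit)
        self.m_n0 = base_neg_margin  # base additive cosine margin (negative logits)
        self.t = coupling            # couples negative hardness into both margins

    def forward(self, feats, labels):
        # Cosine similarities between normalized features and class weights.
        cos = F.linear(F.normalize(feats), F.normalize(self.weight))
        cos_y = cos.gather(1, labels.view(-1, 1))  # positive logit, shape (B, 1)

        # Hard negatives: non-target classes whose cosine exceeds the positive logit.
        onehot = F.one_hot(labels, cos.size(1)).bool()
        hard_neg = (cos > cos_y) & ~onehot

        # Sample-wise hardness: mean cosine over hard negatives (0 if none).
        hard_sum = torch.where(hard_neg, cos, torch.zeros_like(cos)).sum(1, keepdim=True)
        h = hard_sum / hard_neg.sum(1, keepdim=True).clamp(min=1)

        # Collaborative margins: harder negatives enlarge BOTH margins.
        m_p = self.m_p0 + self.t * h
        m_n = self.m_n0 + self.t * h

        # Apply the angular margin to the positive logit: cos(theta_y + m_p).
        theta_y = torch.acos(cos_y.clamp(-1 + 1e-7, 1 - 1e-7))
        pos_logit = torch.cos(theta_y + m_p)

        # Apply the cosine margin to hard negative logits, then scale.
        logits = cos + m_n * hard_neg.float()
        logits = logits.scatter(1, labels.view(-1, 1), pos_logit)
        return F.cross_entropy(self.s * logits, labels)
```

Under this simplified coupling, a sample surrounded by strong hard negatives receives a tighter intra-class constraint (larger positive margin) and a stronger inter-class push (larger negative margin) at the same time, which is the sample-wise negative-positive collaboration the abstract describes.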