Face recognition has long been an active and vital topic in the computer vision community. Previous research has mainly focused on the loss functions used to train facial feature extraction networks, among which improvements to softmax-based loss functions have greatly advanced face recognition performance. However, the contradiction between the drastically increasing number of face identities and the shortage of GPU memory is gradually becoming irreconcilable. In this paper, we thoroughly analyze the optimization goal of softmax-based loss functions and the difficulty of training on massive numbers of identities. We find that the importance of negative classes in the softmax function for face representation learning is not as high as previously thought. Our experiments demonstrate no loss of accuracy when training softmax-based loss functions with only 10\% of the classes randomly sampled, compared with training on the full set of classes using state-of-the-art models on mainstream benchmarks. We also implement a very efficient distributed sampling algorithm that balances model accuracy and training efficiency, using only eight NVIDIA RTX 2080Ti GPUs to complete classification tasks with tens of millions of identities. The code of this paper is available at https://github.com/deepinsight/insightface/tree/master/recognition/partial_fc.
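To make the sampling idea concrete, below is a minimal single-GPU PyTorch sketch, not the authors' distributed implementation: each step, the softmax is computed over the positive classes present in the batch plus randomly sampled negative classes, up to a fixed fraction (e.g. 10\%) of all identities. The class name `PartialSoftmax`, the `sample_rate` argument, and the logit scale of 64 are illustrative assumptions; the additive margin used by ArcFace-style losses is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialSoftmax(nn.Module):
    """Softmax cross-entropy over a sampled subset of class centers (sketch)."""

    def __init__(self, embedding_dim, num_classes, sample_rate=0.1):
        super().__init__()
        # Full class-center matrix; only a sampled slice is used per step.
        self.weight = nn.Parameter(torch.randn(num_classes, embedding_dim))
        self.num_classes = num_classes
        self.num_sample = max(1, int(sample_rate * num_classes))

    def forward(self, embeddings, labels):
        # Always keep the classes that appear in the batch (the positives).
        positive = labels.unique()
        # Pad with randomly chosen negatives up to the sampling budget.
        num_neg = max(self.num_sample - positive.numel(), 0)
        perm = torch.randperm(self.num_classes, device=labels.device)
        mask = torch.ones(self.num_classes, dtype=torch.bool, device=labels.device)
        mask[positive] = False
        negative = perm[mask[perm]][:num_neg]
        sampled = torch.cat([positive, negative])
        # Remap original labels to indices within the sampled subset.
        remap = torch.full((self.num_classes,), -1, dtype=torch.long,
                           device=labels.device)
        remap[sampled] = torch.arange(sampled.numel(), device=labels.device)
        # Cosine logits against the sampled centers only.
        logits = F.linear(F.normalize(embeddings),
                          F.normalize(self.weight[sampled]))
        return F.cross_entropy(logits * 64.0, remap[labels])
```

Usage follows an ordinary classification head, e.g. `loss = partial_fc(backbone(images), labels)`; memory for logits and the weight gradient scales with the sampled subset rather than the full identity count.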