The quality of face images significantly influences the performance of underlying face recognition algorithms. Face image quality assessment (FIQA) estimates the utility of the captured image in achieving reliable and accurate recognition performance. In this work, we propose a novel learning paradigm that learns internal network observations during the training process. Based on that, our proposed CR-FIQA uses this paradigm to estimate the face image quality of a sample by predicting its relative classifiability. This classifiability is measured based on the allocation of the training sample feature representation in angular space with respect to its class center and the nearest negative class center. We experimentally illustrate the correlation between the face image quality and the sample relative classifiability. As such property is only observable for the training dataset, we propose to learn this property from the training dataset and utilize it to predict the quality measure on unseen samples. This training is performed simultaneously while optimizing the class centers by an angular margin penalty-based softmax loss used for face recognition model training. Through extensive evaluation experiments on eight benchmarks and four face recognition models, we demonstrate the superiority of our proposed CR-FIQA over state-of-the-art (SOTA) FIQA algorithms.
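The classifiability measure described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: it assumes unit-normalized features and class centers, takes the cosine similarity to the sample's own class center and to the nearest negative class center, and combines them into a ratio (the exact combination and any margin terms in CR-FIQA may differ). The function name `classifiability` and the `eps` stabilizer are illustrative choices.

```python
import numpy as np

def classifiability(feature, centers, label, eps=1e-9):
    """Illustrative relative-classifiability score for one training sample.

    feature: (d,) feature vector of the sample
    centers: (num_classes, d) learned class centers
    label:   integer index of the sample's ground-truth class
    """
    # Normalize so dot products are cosine similarities (angular space).
    f = feature / np.linalg.norm(feature)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    cos = c @ f

    ccs = cos[label]                    # similarity to own class center
    neg = np.delete(cos, label)
    nnccs = neg.max()                   # similarity to nearest negative center

    # Higher when the sample sits close to its own center and far from the
    # nearest negative center; such samples are treated as higher quality.
    return ccs / (nnccs + 1.0 + eps)
```

A well-captured face should land near its identity's center and away from other identities, yielding a high score; a degraded image drifts toward other class centers and scores low. CR-FIQA trains a regression head to predict this quantity, so quality can be estimated for unseen samples where class labels are unavailable.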