Face recognition has made significant progress in recent years due to deep convolutional neural networks (CNNs). In many face recognition (FR) scenarios, face images are acquired from a sequence with large intra-variations. These intra-variations, mainly caused by low-quality face images, make recognition performance unstable. Previous works have focused on ad-hoc methods to select frames from a video, or on face image quality assessment (FIQA) methods that consider only a particular distortion or a combination of several distortions. In this work, we present an efficient no-reference image quality assessment for FR that directly links image quality assessment (IQA) to FR. More specifically, we propose a new measurement to evaluate image quality without any reference. Based on the proposed quality measurement, we propose a deep Tiny Face Quality network (tinyFQnet) to learn a quality prediction function from data. We evaluate the proposed method with several powerful FR models on two classical video-based (or template-based) benchmarks: IJB-B and YTF. Extensive experiments show that, although tinyFQnet is much smaller than other quality networks, the proposed method outperforms state-of-the-art quality assessment methods in terms of both effectiveness and efficiency.
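The abstract does not spell out the tinyFQnet architecture or how its scores are used downstream; the sketch below is only a minimal illustration, assuming PyTorch, of how a small quality-prediction CNN and quality-weighted template pooling might be wired together. The class name TinyQualityNet, the layer sizes, the helper quality_weighted_template, and the placeholder quality targets are all hypothetical and are not the authors' implementation.

```python
import torch
import torch.nn as nn

class TinyQualityNet(nn.Module):
    """Small CNN that maps a face crop to a scalar quality score in [0, 1].
    Layer sizes are illustrative only; the actual tinyFQnet differs."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):
        # Returns one quality score per input image, shape (N,).
        return self.head(self.features(x)).squeeze(-1)


def quality_weighted_template(embeddings, qualities):
    """Aggregate per-frame FR embeddings into a single template vector,
    weighting each frame by its predicted quality."""
    w = qualities / qualities.sum().clamp(min=1e-8)
    pooled = (embeddings * w.unsqueeze(-1)).sum(dim=0)
    return nn.functional.normalize(pooled, dim=0)


if __name__ == "__main__":
    # Toy regression step against precomputed quality labels
    # (e.g., labels derived from how well an FR model matches each frame).
    net = TinyQualityNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    frames = torch.randn(8, 3, 112, 112)   # a short face track (fake data)
    labels = torch.rand(8)                  # placeholder quality targets
    loss = nn.functional.mse_loss(net(frames), labels)
    opt.zero_grad(); loss.backward(); opt.step()

    # Inference: score frames, then pool FR embeddings by predicted quality.
    with torch.no_grad():
        q = net(frames)
        emb = torch.randn(8, 512)           # embeddings from some FR model
        template = quality_weighted_template(emb, q)
    print(loss.item(), template.shape)
```

The design choice illustrated here, keeping the quality network far smaller than the FR backbone and using its scores only to weight or select frames before template aggregation, mirrors the efficiency argument made in the abstract, but the details above are assumptions rather than the paper's method.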