Computer vision models for image quality assessment (IQA) predict the subjective effect of generic image degradations, such as artefacts, blur, poor exposure, or color distortions. The scarcity of face images in existing IQA datasets (below 10\%) limits the precision of IQA required for accurately filtering low-quality face images or for guiding CV models for face image processing, such as super-resolution, image enhancement, and generation. In this paper, we first introduce the largest annotated IQA database to date, containing 20,000 human faces (an order of magnitude larger than all existing rated face datasets) of diverse individuals in highly varied circumstances, quality levels, and distortion types. Based on this database, we further propose a novel deep learning model that re-purposes generative prior features for predicting subjective face quality. By exploiting the rich statistics encoded in well-trained generative models, we obtain generative prior information about the images and use it as a latent reference to facilitate the blind IQA task. Experimental results demonstrate the superior prediction accuracy of the proposed model on the face IQA task.
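The "latent reference" idea described above can be sketched at a high level: features of the distorted image are compared against prior features recovered from a well-trained generative model, and the gap informs a quality score. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation; the feature extractor, generative prior, and regressor are all stand-in random projections, and every function name and dimension is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_image(img):
    """Stand-in for a CNN feature extractor (random projection here)."""
    W = rng.standard_normal((img.size, 64)) / np.sqrt(img.size)
    return img.ravel() @ W

def generative_prior(img):
    """Stand-in for prior features from a well-trained generative model
    (e.g., latents recovered by inverting the image into the model);
    here just another fixed random projection for illustration."""
    W = rng.standard_normal((img.size, 64)) / np.sqrt(img.size)
    return img.ravel() @ W

def predict_quality(img):
    """Blind IQA sketch: combine image features, prior features, and
    their difference (the 'distorted vs. latent reference' gap), then
    regress a scalar score with placeholder weights."""
    f = encode_image(img)
    p = generative_prior(img)               # latent "reference" features
    joint = np.concatenate([f, p, f - p])   # image / prior / gap
    w = rng.standard_normal(joint.size) / np.sqrt(joint.size)
    return float(w @ joint)                 # scalar quality score

img = rng.random((32, 32))                  # toy grayscale face crop
score = predict_quality(img)
```

In a real model the projections would be learned networks and the prior would come from an actual pretrained generator; the sketch only shows how prior features can serve as a pseudo-reference in an otherwise no-reference setting.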