In this article, we analyze how changing the underlying 3D shape of the base identity in face images can distort their overall appearance, particularly from the perspective of deep face recognition. As is done in popular training-data augmentation schemes, we graphically render real and synthetic face images with randomly chosen or best-fitting 3D face models to generate novel views of the base identity. We compare the deep features extracted from these images to assess the perturbation these renderings introduce into the original identity. We perform this analysis at various degrees of facial yaw, with base identities varying in gender and ethnicity. Additionally, we investigate whether adding some form of context and background pixels to these rendered images, when they are used as training data, further improves the downstream performance of a face recognition model. Our experiments demonstrate the significance of facial shape for accurate face matching and underscore the importance of contextual data for network training.
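The feature comparison described above is typically done by measuring the cosine similarity between the deep embeddings of the original image and its rendered novel view. The following is a minimal sketch of that measurement, assuming hypothetical 512-dimensional features; the synthetic vectors stand in for the outputs of a real face recognition network, which is not part of the original text.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two deep feature vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

# Hypothetical 512-d deep features: feat_rendered simulates the
# embedding of a rendered novel view of the same base identity,
# perturbed slightly relative to the original image's embedding.
rng = np.random.default_rng(0)
feat_original = rng.standard_normal(512)
feat_rendered = feat_original + 0.1 * rng.standard_normal(512)

# A similarity near 1.0 means the rendering barely perturbed the
# identity; lower values indicate identity distortion.
print(cosine_similarity(feat_original, feat_rendered))
```

In practice, this score would be computed between embeddings produced by the face recognition model under study, aggregated across yaw angles and demographic groups.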