Image resolution, or more generally, image quality, plays an essential role in the performance of today's face recognition systems. To address the degradation caused by low-resolution inputs, we propose the octuplet loss, a novel combination of the popular triplet loss that improves robustness against varying image resolution via fine-tuning of existing face recognition models. The octuplet loss leverages the relationship between high-resolution images and their synthetically down-sampled variants jointly with their identity labels. Fine-tuning several state-of-the-art models with our method shows that we can significantly boost performance for cross-resolution (high-to-low resolution) face verification on various datasets without meaningfully degrading performance on high-to-high resolution images. Applied to the FaceTransformer network, our method achieves 95.12% face verification accuracy on the challenging XQLFW dataset while reaching 99.73% on the LFW database. Moreover, low-to-low resolution face verification accuracy also benefits from our method. We release our code to allow seamless integration of the octuplet loss into existing frameworks.
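As a rough illustration of the idea described above, the sketch below combines standard triplet margin losses over high-resolution (HR) and low-resolution (LR) embeddings of an anchor, a positive (same identity), and a negative (different identity). The specific set of four HR/LR term combinations, the function names, and the margin value are assumptions for illustration only; the released code defines the exact formulation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    # Standard triplet margin loss on Euclidean distances between embeddings.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

def octuplet_loss(a_hr, a_lr, p_hr, p_lr, n_hr, n_lr, margin=0.5):
    # Illustrative combination of triplet losses over HR and LR variants:
    # the embedding is pushed to keep a face and its down-sampled variant
    # close together while separating other identities at both resolutions.
    # The exact terms used in the paper may differ from this sketch.
    return (triplet_loss(a_hr, p_hr, n_hr, margin)    # HR anchor, HR pos/neg
            + triplet_loss(a_hr, p_lr, n_lr, margin)  # HR anchor, LR pos/neg
            + triplet_loss(a_lr, p_hr, n_hr, margin)  # LR anchor, HR pos/neg
            + triplet_loss(a_lr, p_lr, n_lr, margin)) # LR anchor, LR pos/neg
```

In a fine-tuning setup, the six embeddings would come from the face recognition backbone applied to the HR images and their synthetically down-sampled counterparts, and this loss would be minimized over mined triplets.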