In this study, we introduce a feature knowledge distillation framework that improves low-resolution (LR) face recognition performance using knowledge obtained from high-resolution (HR) images. The proposed framework transfers informative features from an HR-trained network to an LR-trained network by reducing the distance between them. We employ cosine similarity as the distance metric to effectively align the HR and LR features. This approach differs from conventional knowledge distillation frameworks, which rely on L_p distance metrics, and offers the advantage of converging well when reducing the distance between features of different resolutions. Without bells and whistles, our framework achieves a 3% improvement over the previous state-of-the-art method on the AgeDB-30 benchmark while maintaining strong performance on HR images. The effectiveness of cosine similarity as a distance metric is validated through statistical analysis, making our approach a promising solution for real-world applications in which LR images are frequently encountered. The code and pretrained models will be publicly available on GitHub.
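To illustrate the core idea, the following is a minimal sketch of what such a cosine-similarity distillation objective might look like in PyTorch. The function name, tensor shapes, and the choice to detach the teacher features are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def cosine_distillation_loss(lr_features: torch.Tensor,
                             hr_features: torch.Tensor) -> torch.Tensor:
    """Pull LR student features toward HR teacher features by
    maximizing their cosine similarity.

    Both inputs are (batch, dim) embedding tensors. The HR teacher
    features are detached (hypothetical choice) so that gradients
    only update the LR student network.
    """
    cos = F.cosine_similarity(lr_features, hr_features.detach(), dim=1)
    # Loss reaches 0 when the features are perfectly aligned (cos = 1).
    return (1.0 - cos).mean()
```

Because cosine similarity compares only feature directions and ignores magnitudes, a loss of this form can remain well-scaled even when HR and LR embeddings differ in norm, which is one plausible reading of the convergence advantage claimed over L_p-based distillation.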