Deep learning has achieved outstanding performance on face recognition benchmarks, but performance degrades significantly for low resolution (LR) images. We propose an attention similarity knowledge distillation approach, which transfers attention maps obtained from a high resolution (HR) network (the teacher) to an LR network (the student) to boost LR recognition performance. Inspired by humans' ability to approximate an object's region in an LR image based on prior knowledge obtained from HR images, we designed the knowledge distillation loss using cosine similarity so that the student network's attention resembles the teacher network's attention. Experiments on various LR face-related benchmarks confirmed that the proposed method generally improves recognition performance in LR settings, outperforming state-of-the-art results by simply transferring well-constructed attention maps. The code and pretrained models are publicly available at https://github.com/gist-ailab/teaching-where-to-look.
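To make the core idea concrete, the following is a minimal sketch (not the authors' released implementation) of a cosine-similarity attention distillation loss, assuming the teacher and student produce attention maps of identical shape at corresponding stages; the names `attention_similarity_loss`, `attn_t`, `attn_s`, and the weighting term `lambda_attn` are illustrative assumptions.

```python
# Minimal sketch of a cosine-similarity attention distillation loss.
# Assumes teacher (HR) and student (LR) networks expose attention maps
# of the same shape; not the paper's official code.
import torch
import torch.nn.functional as F


def attention_similarity_loss(attn_t: torch.Tensor, attn_s: torch.Tensor) -> torch.Tensor:
    """Penalize dissimilarity between teacher and student attention maps.

    attn_t, attn_s: attention maps of shape (batch, ...) taken from
    corresponding stages of the teacher and student networks.
    """
    # Flatten each map to a vector and compare with cosine similarity.
    t = attn_t.flatten(start_dim=1)
    s = attn_s.flatten(start_dim=1)
    # Teacher attention is treated as a fixed target (no gradient).
    cos = F.cosine_similarity(s, t.detach(), dim=1)
    return (1.0 - cos).mean()


# Usage sketch: add the distillation term to the student's recognition loss.
# The weight lambda_attn is a hypothetical hyperparameter, not a value from the paper.
# loss = recognition_loss + lambda_attn * attention_similarity_loss(attn_t, attn_s)
```

Under this formulation, the loss is zero when the student's attention points in the same direction as the teacher's, which matches the stated goal of making the student attend to the same regions the HR teacher does.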