Recent face super-resolution (FSR) methods either feed the whole face image into convolutional neural networks (CNNs) or utilize extra facial priors (e.g., facial parsing maps, facial landmarks) to focus on facial structure, thereby maintaining the consistency of the facial structure while restoring facial details. However, the limited receptive fields of CNNs and inaccurate facial priors reduce the naturalness and fidelity of the reconstructed face. In this paper, we propose a novel paradigm based on the self-attention mechanism (i.e., the core of the Transformer) to fully exploit the representational capacity of facial structure features. Specifically, we design a Transformer-CNN aggregation network (TANet) consisting of two paths: one path uses CNNs to restore fine-grained facial details, while the other utilizes a resource-friendly Transformer to capture global information through long-distance visual relation modeling. By aggregating the features from these two paths, the consistency of the global facial structure and the fidelity of local facial detail restoration are strengthened simultaneously. Experimental results on face reconstruction and recognition verify that the proposed method significantly outperforms state-of-the-art methods.
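To make the dual-path design concrete, below is a minimal PyTorch sketch of one aggregation block in the spirit of the abstract: a convolutional path for local detail, a self-attention path over flattened spatial tokens for long-distance relations, and a 1x1 convolution fusing the two. All module names, channel sizes, and the fusion scheme are illustrative assumptions, not the authors' actual TANet implementation.

```python
# A minimal sketch of a Transformer-CNN dual-path block, assuming a PyTorch
# setup; hyperparameters and module names are hypothetical, not from TANet.
import torch
import torch.nn as nn


class CNNPath(nn.Module):
    """Local path: stacked convolutions refine fine-grained facial details."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual refinement of local details


class TransformerPath(nn.Module):
    """Global path: self-attention over spatial tokens models
    long-distance visual relations across the whole face."""

    def __init__(self, channels: int = 64, num_heads: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads,
            dim_feedforward=2 * channels, batch_first=True,
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C) token sequence
        tokens = self.encoder(tokens)           # global self-attention
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class DualPathBlock(nn.Module):
    """Aggregates the local CNN path and the global Transformer path."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.local_path = CNNPath(channels)
        self.global_path = TransformerPath(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate both paths' features, then fuse back to `channels`.
        return self.fuse(
            torch.cat([self.local_path(x), self.global_path(x)], dim=1)
        )


if __name__ == "__main__":
    feat = torch.randn(1, 64, 16, 16)      # a low-resolution face feature map
    print(DualPathBlock(64)(feat).shape)   # -> torch.Size([1, 64, 16, 16])
```

Fusing by concatenation plus a 1x1 convolution is one plausible aggregation choice; the paper's actual fusion mechanism may differ.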