Recently, face super-resolution methods steered by deep convolutional neural networks (CNNs) have made great progress in restoring degraded facial details through joint training with facial priors. However, these methods have some obvious limitations. On the one hand, multi-task joint learning requires additional annotations on the dataset, and the introduced prior networks significantly increase the computational cost of the model. On the other hand, the limited receptive field of CNNs reduces the fidelity and naturalness of the reconstructed facial images, yielding suboptimal results. In this work, we propose an efficient CNN-Transformer Cooperation Network (CTCNet) for the face super-resolution task, which uses a multi-scale connected encoder-decoder architecture as its backbone. Specifically, we first devise a novel Local-Global Feature Cooperation Module (LGCM), composed of a Facial Structure Attention Unit (FSAU) and a Transformer block, to simultaneously promote consistent restoration of local facial details and the global facial structure. Then, we design an efficient Feature Refinement Module (FRM) to enhance the encoded features. Finally, to further improve the restoration of fine facial details, we present a Multi-scale Feature Fusion Unit (MFFU) to adaptively fuse features from different stages of the encoder. Extensive evaluations on various datasets show that the proposed CTCNet significantly outperforms other state-of-the-art methods. The source code will be available at https://github.com/IVIPLab/CTCNet.
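To make the described composition concrete, the following is a minimal, hypothetical PyTorch sketch of how an LGCM (a local FSAU-style branch in parallel with a Transformer block) could sit inside a two-scale encoder-decoder, with a simple 1x1-convolution stand-in for the MFFU skip-feature fusion. All module internals, names such as `CTCNetSketch`, channel widths, and the assumption of a pre-upsampled low-resolution input are illustrative placeholders, not the authors' implementation.

```python
# Illustrative sketch of the CTCNet idea from the abstract: an encoder-decoder
# backbone whose blocks pair a CNN attention branch (FSAU) with a Transformer
# branch (LGCM), plus a simplified fusion of multi-scale encoder features (MFFU).
# Internals and hyperparameters are placeholders, not the paper's design.
import torch
import torch.nn as nn


class FSAU(nn.Module):
    """Placeholder facial-structure attention unit: local conv features
    modulated by a spatial attention map."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.GELU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        feat = self.conv(x)
        return x + feat * self.attn(feat)


class TransformerBlock(nn.Module):
    """Placeholder global branch: self-attention over flattened spatial tokens."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, HW, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        return x + attn_out.transpose(1, 2).reshape(b, c, h, w)


class LGCM(nn.Module):
    """Local-Global Feature Cooperation Module: FSAU + Transformer in parallel."""
    def __init__(self, channels):
        super().__init__()
        self.local_branch = FSAU(channels)
        self.global_branch = TransformerBlock(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([self.local_branch(x),
                                    self.global_branch(x)], dim=1))


class CTCNetSketch(nn.Module):
    """Two-scale encoder-decoder with a simplistic stand-in for MFFU fusion."""
    def __init__(self, channels=32):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.enc1 = LGCM(channels)
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.enc2 = LGCM(channels)
        self.dec2 = LGCM(channels)
        self.up = nn.ConvTranspose2d(channels, channels, 2, stride=2)
        self.mffu = nn.Conv2d(2 * channels, channels, 1)   # stand-in for MFFU
        self.dec1 = LGCM(channels)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr):
        e1 = self.enc1(self.head(lr))                      # full resolution
        e2 = self.enc2(self.down(e1))                      # half resolution
        d2 = self.up(self.dec2(e2))                        # back to full resolution
        d1 = self.dec1(self.mffu(torch.cat([d2, e1], dim=1)))
        return lr + self.tail(d1)                          # residual reconstruction


if __name__ == "__main__":
    x = torch.randn(1, 3, 64, 64)                          # pre-upsampled face input
    print(CTCNetSketch()(x).shape)                         # torch.Size([1, 3, 64, 64])
```

The sketch only shows the cooperation pattern; the actual FRM, MFFU, and multi-scale connections in CTCNet are more elaborate than the single 1x1 fusion used here.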