Face super-resolution (FSR) aims to reconstruct high-resolution face images from low-resolution inputs. Recent works have achieved success on this task by exploiting facial priors such as facial landmarks. However, most existing methods focus on global shape and structure information while paying less attention to local texture information, which prevents them from recovering local details well. In this paper, we propose a novel recurrent convolutional network based framework for face super-resolution that progressively introduces both global shape and local texture information. We take full advantage of the intermediate outputs of the recurrent network: landmark information and facial action unit (AU) information are extracted from the outputs of the first and second steps, respectively, rather than from the low-resolution input. Moreover, we introduce AU classification results as a novel quantitative metric for facial detail restoration. Extensive experiments show that our proposed method significantly outperforms state-of-the-art FSR methods in terms of both image quality and facial detail restoration.
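The sketch below illustrates the progressive, recurrent use of priors described above: a shared super-resolution step is unrolled, landmark (global shape) features are taken from the first intermediate output and AU (local texture) features from the second. It is a minimal PyTorch-style sketch under assumptions; the module names (RecurrentFSR, landmark_branch, au_branch), the three-step unrolling, and the fusion-by-concatenation design are illustrative, not the authors' exact architecture.

```python
# Minimal sketch of the progressive recurrent FSR idea, assuming a shared
# recurrent SR step and concatenation-based prior fusion. All names and the
# three-step unrolling are hypothetical, not the paper's exact network.
import torch
import torch.nn as nn


class RecurrentFSR(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Shared recurrent super-resolution step (weights reused at every step).
        self.sr_step = nn.Sequential(
            nn.Conv2d(3 + channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )
        # Prior extractors applied to intermediate outputs, not to the LR input:
        # landmarks (global shape) after step 1, action units (local texture) after step 2.
        self.landmark_branch = nn.Conv2d(3, channels, 3, padding=1)
        self.au_branch = nn.Conv2d(3, channels, 3, padding=1)

    def forward(self, lr_up):
        # lr_up: low-resolution face, bicubically upsampled to target size, shape (B, 3, H, W).
        b, _, h, w = lr_up.shape
        prior = lr_up.new_zeros(b, self.landmark_branch.out_channels, h, w)

        # Step 1: coarse reconstruction; extract the landmark (global shape) prior from its output.
        sr1 = self.sr_step(torch.cat([lr_up, prior], dim=1)) + lr_up
        prior = self.landmark_branch(sr1)

        # Step 2: refine with the shape prior; extract the AU (local texture) prior from its output.
        sr2 = self.sr_step(torch.cat([sr1, prior], dim=1)) + sr1
        prior = self.au_branch(sr2)

        # Step 3: final refinement guided by the texture prior.
        sr3 = self.sr_step(torch.cat([sr2, prior], dim=1)) + sr2
        return sr1, sr2, sr3
```

In this reading, each step refines the previous estimate residually, so the priors are always extracted from progressively sharper intermediate faces rather than from the degraded input.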