Blind face restoration usually encounters face inputs of diverse scales, especially in the real world. However, most existing works support only faces of a specific scale, which limits their applicability in real-world scenarios. In this work, we propose a novel scale-aware blind face restoration framework, named FaceFormer, which formulates facial feature restoration as a scale-aware transformation. The proposed Facial Feature Up-sampling (FFUP) module dynamically generates upsampling filters based on the original scale-factor priors, which enables our network to adapt to arbitrary face scales. Moreover, we propose the Facial Feature Embedding (FFE) module, which leverages a transformer to hierarchically extract diverse and robust facial latent features. As a result, FaceFormer produces faithful and robust restored faces with realistic and symmetrical details of facial components. Extensive experiments demonstrate that our method, trained on a synthetic dataset, generalizes better to natural low-quality images than current state-of-the-art methods.
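To make the idea of scale-aware upsampling concrete, the sketch below illustrates one plausible way a module like FFUP could condition upsampling filters on a continuous scale factor: a small MLP maps the scale prior to per-channel depthwise filters that refine a bilinearly resized feature map. This is a minimal, hypothetical illustration, not the authors' implementation; all module and parameter names are assumptions.

```python
# Minimal sketch (assumed, not the paper's code) of scale-conditioned dynamic
# upsampling: an MLP predicts depthwise filters from the scale factor, which
# are applied after resizing the feature map to the target scale.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaleAwareUpsample(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        # MLP that predicts one depthwise filter per channel from the scale factor.
        self.filter_gen = nn.Sequential(
            nn.Linear(1, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, channels * kernel_size * kernel_size),
        )

    def forward(self, feat: torch.Tensor, scale: float) -> torch.Tensor:
        b, c, h, w = feat.shape
        # Predict depthwise filters conditioned on the (continuous) scale factor.
        s = feat.new_tensor([[scale]])                      # shape (1, 1)
        k = self.filter_gen(s).view(c, 1, self.kernel_size, self.kernel_size)
        k = F.softmax(k.view(c, -1), dim=-1).view_as(k)     # normalize each filter
        # Resize to the target scale, then refine with the predicted filters.
        feat = F.interpolate(feat, scale_factor=scale, mode="bilinear",
                             align_corners=False)
        return F.conv2d(feat, k, padding=self.kernel_size // 2, groups=c)


if __name__ == "__main__":
    # Usage: upsample a feature map by an arbitrary (non-integer) factor.
    up = ScaleAwareUpsample(channels=32)
    x = torch.randn(2, 32, 16, 16)
    y = up(x, scale=1.7)
    print(y.shape)  # torch.Size([2, 32, 27, 27])
```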