Face inpainting aims at plausibly predicting the missing pixels of a face image within a corrupted region. Most existing methods rely on generative models that learn a face image distribution from a large dataset, which can produce uncontrollable results, especially for large missing regions. To introduce strong control into face inpainting, we propose a novel reference-guided face inpainting method that fills a large missing region under identity and texture control guided by a reference face image. However, generating high-quality results while imposing two control signals is challenging. To tackle this difficulty, we propose a dual-control, one-stage framework that decouples the reference image into two levels of information for flexible control: high-level identity information and low-level texture information, where the identity information determines the shape of the face and the texture information describes the component-aware texture. To synthesize high-quality results, we design two novel modules, Half-AdaIN and the Component-Wise Style Injector (CWSI), to inject the two kinds of control information into the inpainting process. Our method produces realistic results with identity and texture control faithful to the reference image. To the best of our knowledge, this is the first work to concurrently apply identity- and component-level controls in face inpainting, enabling more precise and controllable results. Code is available at https://github.com/WuyangLuo/RefFaceInpainting
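To make the identity-injection idea concrete, below is a minimal, illustrative sketch of a Half-AdaIN-style layer, assuming it applies adaptive instance normalization to half of the feature channels (modulated by a reference identity embedding) while leaving the other half untouched. The class and parameter names are hypothetical and do not reflect the authors' released implementation.

```python
import torch
import torch.nn as nn

class HalfAdaIN(nn.Module):
    """Sketch: identity-conditioned AdaIN applied to half of the channels (assumption)."""
    def __init__(self, num_channels: int, id_dim: int):
        super().__init__()
        self.half = num_channels // 2
        # Instance norm without learned affine params; scale/shift come from the identity code.
        self.norm = nn.InstanceNorm2d(self.half, affine=False)
        # Map the identity embedding to per-channel scale (gamma) and shift (beta).
        self.to_gamma = nn.Linear(id_dim, self.half)
        self.to_beta = nn.Linear(id_dim, self.half)

    def forward(self, feat: torch.Tensor, id_code: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) inpainting features; id_code: (B, id_dim) reference identity embedding.
        a, b = feat[:, :self.half], feat[:, self.half:]
        gamma = self.to_gamma(id_code).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_beta(id_code).unsqueeze(-1).unsqueeze(-1)
        a = self.norm(a) * (1 + gamma) + beta   # identity-conditioned modulation
        return torch.cat([a, b], dim=1)          # untouched half preserves spatial context
```

Keeping half of the channels unmodulated is one plausible way to let the network retain spatial context from the corrupted input while the other half is steered toward the reference identity.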