A regular convolution layer applying a filter in the same way over known and unknown areas causes visual artifacts in the inpainted image. Several studies address this issue with feature re-normalization on the output of the convolution. However, these models use a significant number of learnable parameters for feature re-normalization, or assume a binary representation of the certainty of an output. We propose (layer-wise) feature imputation of the missing input values to a convolution. In contrast to learned feature re-normalization, our method is efficient and introduces a minimal number of parameters. Furthermore, we propose a revised gradient penalty for image inpainting, and a novel GAN architecture trained exclusively on adversarial loss. Our quantitative evaluation on the FDF dataset shows that our revised gradient penalty and alternative convolution significantly improve generated image quality. We present comparisons on CelebA-HQ and Places2 against the current state of the art to validate our model.
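To make the core idea concrete, the following is a minimal sketch of layer-wise feature imputation before a standard convolution. The function name `impute_and_convolve`, the use of a fixed per-channel fill value, and the mask-update rule are illustrative assumptions only; the paper's actual method learns the imputation, and this NumPy loop stands in for an optimized convolution.

```python
import numpy as np

def impute_and_convolve(x, mask, weight, impute_value):
    """Sketch: fill unknown inputs before a regular convolution.

    x:            (C, H, W) feature map
    mask:         (H, W), 1 = known pixel, 0 = missing pixel
    weight:       (C_out, C, k, k) convolution kernel (no padding, stride 1)
    impute_value: (C,) per-channel fill value, a stand-in for a
                  learned imputation parameter (hypothetical here)
    """
    # Replace unknown positions with the per-channel imputation value,
    # so the filter sees a complete input instead of arbitrary garbage.
    filled = np.where(mask[None] > 0, x, impute_value[:, None, None])

    C_out, C, k, _ = weight.shape
    out_h, out_w = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((C_out, out_h, out_w))
    new_mask = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = filled[:, i:i + k, j:j + k]
            # Contract (C, k, k) of the patch against each output filter.
            out[:, i, j] = np.tensordot(weight, patch, axes=3)
            # An output is treated as (partially) known once any input
            # in its receptive field was known.
            new_mask[i, j] = mask[i:i + k, j:j + k].max()
    return out, new_mask
```

Stacking such layers lets certainty propagate outward from the known region, without the extra re-normalization parameters the abstract contrasts against.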