Recent image inpainting methods have shown promising results due to the power of deep learning, which can exploit external information learned from large training datasets. However, many state-of-the-art inpainting networks remain limited in exploiting the internal information available in a given input image at test time. To mitigate this problem, we present a novel and efficient self-supervised fine-tuning algorithm that adapts the parameters of fully pre-trained inpainting networks without using ground-truth target images. We update the parameters of pre-trained state-of-the-art inpainting networks by utilizing self-similar patches (i.e., self-exemplars) within the given input image, without changing the network architecture, and improve inpainting quality by a large margin. Qualitative and quantitative experimental results demonstrate the superiority of the proposed algorithm, which achieves state-of-the-art inpainting results on publicly available benchmark datasets.
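The sketch below is a minimal illustration of how such test-time self-supervised fine-tuning could look, not the paper's exact algorithm: it assumes a generic PyTorch inpainting model with a hypothetical signature model(image, mask), and it simplifies the self-exemplar idea to randomly masking known regions of the test image so that the original pixels there serve as pseudo ground truth.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of test-time self-supervised fine-tuning for inpainting.
# Assumptions (not from the paper): `model(image, mask)` returns a completed
# image; `image` is a 1x3xHxW tensor in [0, 1]; `hole_mask` is a 1x1xHxW
# tensor that is 1 where pixels are missing; the image is larger than the
# synthetic mask size.

def random_square_mask(h, w, size=64, device="cpu"):
    """Sample a square mask (1 = masked) placed uniformly at random."""
    mask = torch.zeros(1, 1, h, w, device=device)
    y = torch.randint(0, h - size, (1,)).item()
    x = torch.randint(0, w - size, (1,)).item()
    mask[..., y:y + size, x:x + size] = 1.0
    return mask

def finetune_on_test_image(model, image, hole_mask, steps=200, lr=1e-4):
    """Adapt a pre-trained inpainting model to a single test image."""
    model.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    _, _, h, w = image.shape
    known = 1.0 - hole_mask  # pixels we can safely use for supervision

    for _ in range(steps):
        # Synthetic hole over a known region: the original pixels there
        # act as ground truth for this self-supervised step.
        synth_mask = random_square_mask(h, w, device=image.device) * known
        corrupted = image * (1.0 - synth_mask)

        output = model(corrupted, synth_mask)
        # Reconstruction loss only on the synthetically removed pixels,
        # where the true content is actually available.
        loss = F.l1_loss(output * synth_mask, image * synth_mask)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        # Final inference on the real hole with the adapted parameters.
        return model(image * (1.0 - hole_mask), hole_mask)
```

In this simplified form, no architectural change is needed: only the parameters are updated on the single test image before the final prediction of the real hole is produced.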