Deep learning-based image inpainting algorithms have shown strong performance by leveraging powerful priors learned from numerous external natural images. However, they produce unpleasant results on test images whose distribution is far from that of the training images, because the models are biased toward the training data. In this paper, we propose a simple image inpainting algorithm with test-time adaptation, named AdaFill. Given a single out-of-distribution test image, our goal is to complete the hole region more naturally than pre-trained inpainting models do. To achieve this, we treat the remaining valid regions of the test image as additional training cues, since natural images have strong internal similarities. Through this test-time adaptation, our network can explicitly exploit both the externally learned image prior from the pre-trained features and the internal prior of the test image. Experimental results show that AdaFill outperforms other models on various out-of-distribution test images. Furthermore, a variant named ZeroFill, which is not pre-trained at all, sometimes also outperforms the pre-trained models.
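The following is a minimal sketch of the test-time adaptation idea described above: the valid (non-hole) pixels of the single test image are used as self-supervision by sampling pseudo-holes inside them and fine-tuning the network before filling the real hole. The network interface `net(image, mask)`, the masking strategy, the loss, and all hyperparameters here are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def adapt_to_test_image(net, test_img, hole_mask, steps=200, lr=1e-4):
    """Hypothetical test-time adaptation loop (a sketch, not AdaFill's exact recipe).

    Assumes `net(image, mask)` is a (possibly pre-trained) PyTorch inpainting
    network returning a completed image, `test_img` is a 1x3xHxW tensor, and
    `hole_mask` is a 1x1xHxW binary tensor with 1 marking missing pixels.
    """
    valid = 1.0 - hole_mask                      # known pixels of the test image
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        # Sample a random pseudo-hole inside the valid region to form a training pair.
        pseudo_hole = (torch.rand_like(hole_mask) < 0.25).float() * valid
        corrupted = test_img * (1.0 - pseudo_hole)
        pred = net(corrupted, pseudo_hole)
        # Supervise only where ground truth exists (the valid region).
        loss = ((pred - test_img).abs() * valid).sum() / valid.sum().clamp(min=1.0)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # After adaptation, fill the actual hole of the test image.
    with torch.no_grad():
        return net(test_img * valid, hole_mask)
```

Starting this loop from pre-trained weights corresponds to the AdaFill setting, while starting from random initialization corresponds to the ZeroFill variant mentioned above.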