Patch-based methods and deep networks have both been employed to tackle the image inpainting problem, each with its own strengths and weaknesses. Patch-based methods can restore a missing region with high-quality texture by searching for nearest-neighbor patches in the unmasked regions. However, these methods often produce implausible content when recovering large missing regions. Deep networks, on the other hand, show promising results in completing large regions, yet their results often lack faithful, sharp details that resemble the surrounding area. To bring together the best of both paradigms, we propose a new deep inpainting framework in which texture generation is guided by a texture memory of patch samples extracted from unmasked regions. The framework has a novel design that allows texture memory retrieval to be trained end-to-end with the deep inpainting network. In addition, we introduce a patch distribution loss to encourage high-quality patch synthesis. The proposed method shows superior performance, both qualitatively and quantitatively, on three challenging image benchmarks, i.e., the Places, CelebA-HQ, and Paris Street-View datasets.
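To make the retrieval idea concrete, below is a minimal, hypothetical PyTorch sketch of differentiable texture-memory retrieval, not the authors' released implementation: patches unfolded from the unmasked region form the memory, and every query location receives a softmax-weighted blend of memory patches, so the retrieval step stays trainable end-to-end as the abstract describes. All names and hyperparameters (`retrieve_texture`, `patch_size`, `temperature`) are illustrative assumptions.

```python
# A minimal sketch of soft texture-memory retrieval (illustrative only;
# names and hyperparameters are assumptions, not the paper's code).
import torch
import torch.nn.functional as F

def retrieve_texture(feat, mask, patch_size=3, temperature=0.1):
    """feat: (B, C, H, W) feature map; mask: (B, 1, H, W), 1 = missing pixel."""
    B, C, H, W = feat.shape
    pad = patch_size // 2

    # Unfold features and mask into overlapping patches: (B, C*k*k, L).
    patches = F.unfold(feat, patch_size, padding=pad)        # (B, C*k*k, L)
    m = F.unfold(mask, patch_size, padding=pad).mean(dim=1)  # (B, L)

    out = []
    for b in range(B):
        # Memory = patches that are fully unmasked; queries = all patches.
        mem = patches[b][:, m[b] == 0]                       # (C*k*k, M)
        q = patches[b]                                       # (C*k*k, L)
        # Cosine similarity between each query and each memory patch.
        sim = F.normalize(q, dim=0).t() @ F.normalize(mem, dim=0)  # (L, M)
        # Soft (differentiable) retrieval instead of a hard nearest neighbor.
        attn = F.softmax(sim / temperature, dim=-1)
        out.append((attn @ mem.t()).t())                     # (C*k*k, L)
    retrieved = torch.stack(out)                             # (B, C*k*k, L)

    # Fold retrieved patches back to a feature map, averaging overlaps.
    recon = F.fold(retrieved, (H, W), patch_size, padding=pad)
    norm = F.fold(torch.ones_like(retrieved), (H, W), patch_size, padding=pad)
    return recon / norm.clamp(min=1e-8)
```

The softmax with a small temperature approximates hard nearest-neighbor patch search while keeping gradients flowing to the feature extractor, which is one plausible way a retrieval module of this kind could be trained jointly with the inpainting network.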