Over the last few years, deep learning techniques have yielded significant improvements in image inpainting. However, many of these techniques fail to reconstruct reasonable structures as they are commonly over-smoothed and/or blurry. This paper develops a new approach for image inpainting that does a better job of reproducing filled regions exhibiting fine details. We propose a two-stage adversarial model, EdgeConnect, that comprises an edge generator followed by an image completion network. The edge generator hallucinates edges of the missing region (both regular and irregular) of the image, and the image completion network fills in the missing regions using the hallucinated edges as a prior. We evaluate our model end-to-end on the publicly available datasets CelebA, Places2, and Paris StreetView, and show that it outperforms current state-of-the-art techniques quantitatively and qualitatively.
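To make the two-stage pipeline concrete, below is a minimal sketch assuming PyTorch, with toy convolutional stacks standing in for the paper's actual generator architectures (the real EdgeConnect generators use dilated residual blocks and adversarial plus feature-matching losses, all omitted here); the class names `EdgeGenerator` and `InpaintingGenerator` and all layer choices are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the EdgeConnect-style two-stage flow:
# stage 1 hallucinates edges in the hole, stage 2 fills the hole
# conditioned on those edges. Not the authors' code.
import torch
import torch.nn as nn


class EdgeGenerator(nn.Module):
    """Stage 1: predict an edge map for the missing region."""
    def __init__(self):
        super().__init__()
        # Inputs: masked grayscale image, masked edge map, binary mask (3 channels).
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, gray_masked, edges_masked, mask):
        x = torch.cat([gray_masked, edges_masked, mask], dim=1)
        return self.net(x)  # predicted edge map in [0, 1]


class InpaintingGenerator(nn.Module):
    """Stage 2: complete the image, conditioned on the composite edge map."""
    def __init__(self):
        super().__init__()
        # Inputs: masked RGB image + composite edge map (4 channels).
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb_masked, edges):
        x = torch.cat([rgb_masked, edges], dim=1)
        return self.net(x)  # completed RGB image


if __name__ == "__main__":
    # Random tensors stand in for a real batch of images, edge maps, and masks.
    b, h, w = 2, 256, 256
    rgb = torch.rand(b, 3, h, w)
    gray = rgb.mean(dim=1, keepdim=True)
    edges = torch.rand(b, 1, h, w)
    mask = (torch.rand(b, 1, h, w) > 0.5).float()  # 1 = missing pixel

    rgb_masked = rgb * (1 - mask)
    gray_masked = gray * (1 - mask)
    edges_masked = edges * (1 - mask)

    edge_gen, inpaint_gen = EdgeGenerator(), InpaintingGenerator()
    pred_edges = edge_gen(gray_masked, edges_masked, mask)
    # Keep known edges outside the hole; use hallucinated edges inside it.
    composite_edges = edges * (1 - mask) + pred_edges * mask
    completed = inpaint_gen(rgb_masked, composite_edges)
    print(completed.shape)  # torch.Size([2, 3, 256, 256])
```

The key design point the sketch illustrates is the hand-off between stages: the completion network never sees raw hole pixels, only the masked image plus an edge map in which the missing region has been filled by the first generator.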