Image inpainting refers to the task of generating a complete, natural image from a partially revealed reference image. Recently, much research interest has focused on addressing this problem with fixed (pretrained) diffusion models. These approaches typically replace the revealed region of the intermediate or final generated image with the corresponding region of the reference image or a variant of it. However, because the unrevealed regions are not directly modified to match this context, the result is incoherence between the revealed and unrevealed regions. To address this incoherence, a small number of methods introduce a rigorous Bayesian framework, but they tend to produce mismatches between the generated and reference images due to approximation errors in computing the posterior distributions. In this paper, we propose COPAINT, which coherently inpaints the whole image without introducing such mismatches. COPAINT also uses a Bayesian framework to jointly modify both the revealed and unrevealed regions, but it approximates the posterior distribution in a way that allows the approximation error to gradually drop to zero over the denoising steps, thereby strongly penalizing any mismatch with the reference image. Our experiments verify that COPAINT outperforms existing diffusion-based methods under both objective and subjective metrics. The code is available at https://github.com/UCSB-NLP-Chang/CoPaint/.
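The replacement strategy used by the fixed-diffusion baselines described above can be sketched as follows. This is a minimal illustration, not the paper's actual code: the function name, array shapes, and mask convention (1 = revealed pixel) are assumptions for exposition.

```python
import numpy as np

def replacement_inpaint_step(x_denoised, x_ref_noised, mask):
    """One replacement step, as used by fixed-diffusion inpainting baselines
    (illustrative sketch): the revealed region (mask == 1) is overwritten
    with the appropriately noised reference image, while the unrevealed
    region (mask == 0) keeps the model's own sample. Because the unrevealed
    part is never adjusted to match the imposed revealed content, the two
    regions can become incoherent -- the problem COPAINT addresses.
    """
    return mask * x_ref_noised + (1.0 - mask) * x_denoised

# Toy usage with 2x2 "images":
x_denoised = np.zeros((2, 2))            # model's current sample
x_ref_noised = np.ones((2, 2))           # noised reference image
mask = np.array([[1.0, 0.0],
                 [0.0, 1.0]])            # 1 = revealed, 0 = unrevealed
out = replacement_inpaint_step(x_denoised, x_ref_noised, mask)
```

In contrast, COPAINT modifies the revealed and unrevealed regions jointly under a Bayesian objective rather than hard-overwriting the revealed region at each step.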