We provide a theoretical justification for sample recovery using diffusion-based image inpainting in a linear model setting. While most inpainting algorithms require retraining for each new mask, we prove that diffusion-based inpainting generalizes to unseen masks without retraining. We analyze a popular, recently proposed diffusion-based inpainting algorithm called RePaint (Lugmayr et al., 2022), and show that it suffers from a misalignment bias that hampers sample recovery even in a two-state diffusion process. Motivated by our analysis, we propose a modified RePaint algorithm, which we call RePaint$^+$, that provably recovers the underlying true sample and enjoys a linear rate of convergence. It achieves this by rectifying the misalignment error in the drift and dispersion terms of the reverse process. To the best of our knowledge, this is the first linear convergence result for a diffusion-based image inpainting algorithm.
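The core mechanism analyzed above can be illustrated with a minimal sketch of a RePaint-style reverse pass: at each reverse step, the known region is replaced by a forward-noised copy of the ground truth, while the unknown region is filled by the reverse (denoising) update, and the two are merged through the mask. The schedule, the toy denoiser `reverse_step`, and all numerical values below are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy DDPM-style schedule over T steps (illustrative assumption).
T = 50
betas = np.linspace(1e-4, 0.05, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

x0 = np.array([1.0, -2.0, 0.5, 3.0])       # ground-truth sample
mask = np.array([1, 1, 0, 0], dtype=bool)  # True = known pixel

def forward_noise(x0, t):
    """Sample x_t ~ q(x_t | x_0) for the known region."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1 - alpha_bars[t]) * rng.normal(size=x0.shape)

def reverse_step(x_t, t):
    """Crude stand-in for the learned reverse update; a trained score
    model would go here. In a linear-Gaussian model the optimal
    denoiser is itself linear in x_t."""
    mean = x_t / np.sqrt(alphas[t])
    if t > 0:
        mean = mean + np.sqrt(betas[t]) * rng.normal(size=x_t.shape)
    return mean

x = rng.normal(size=x0.shape)  # start from pure noise
for t in reversed(range(T)):
    x_unknown = reverse_step(x, t)                        # generated content
    x_known = forward_noise(x0, t - 1) if t > 0 else x0   # noised ground truth
    x = np.where(mask, x_known, x_unknown)                # mask-wise combination
```

Note that the known and unknown regions are produced by two different processes (a forward-noised sample vs. a reverse update), which is exactly where the misalignment between drift and dispersion can enter; RePaint$^+$ is described as correcting this mismatch in the reverse process.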