Recent deep learning methods have achieved promising results in image shadow removal. However, their restored images still suffer from unsatisfactory boundary artifacts, owing to the lack of an embedded degradation prior and limited modeling capacity. Our work addresses these issues by proposing a unified diffusion framework that integrates both image and degradation priors for highly effective shadow removal. Specifically, we first propose a shadow degradation model, which inspires us to build a novel unrolling diffusion model, dubbed ShadowDiffusion. It remarkably improves the model's capacity for shadow removal by progressively refining the desired output with both the degradation prior and the diffusive generative prior, and by nature it can serve as a new strong baseline for image restoration. Furthermore, ShadowDiffusion progressively refines the estimated shadow mask as an auxiliary task of the diffusion generator, which leads to more accurate and robust shadow-free image generation. We conduct extensive experiments on three popular public datasets, ISTD, ISTD+, and SRD, to validate our method's effectiveness. Compared to state-of-the-art methods, our model achieves a significant improvement in PSNR on the SRD dataset, from 31.69 dB to 34.73 dB.
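To make the described idea more concrete, the sketch below shows one way a reverse diffusion loop could jointly refine the shadow-free estimate and an auxiliary shadow mask conditioned on the shadow input. This is a minimal illustration under assumed interfaces (the `denoiser` signature, the coarse initial mask, and the point where a degradation-prior correction would be injected are all hypothetical), not the paper's actual implementation.

```python
# Illustrative sketch (not the authors' code): a DDPM-style reverse loop in which each
# step predicts the noise and also refines a shadow-mask estimate, conditioned on the
# input shadow image. All network names and signatures here are assumptions.
import torch

@torch.no_grad()
def sample_shadow_free(denoiser, shadow_img, init_mask, betas):
    """Assumes denoiser(x_t, mask_t, shadow_img, t) -> (eps_pred, refined_mask)."""
    alphas = 1.0 - betas
    alphas_cumprod = torch.cumprod(alphas, dim=0)

    x_t = torch.randn_like(shadow_img)   # start from pure Gaussian noise
    mask_t = init_mask                    # coarse mask, e.g. from a pre-trained detector
    for t in reversed(range(len(betas))):
        t_batch = torch.full((shadow_img.size(0),), t, device=shadow_img.device)
        eps_pred, mask_t = denoiser(x_t, mask_t, shadow_img, t_batch)

        # Standard DDPM posterior mean; a degradation-prior (unrolling) correction
        # step would be injected around here in a full method.
        alpha_t, alpha_bar_t = alphas[t], alphas_cumprod[t]
        mean = (x_t - (1.0 - alpha_t) / torch.sqrt(1.0 - alpha_bar_t) * eps_pred) / torch.sqrt(alpha_t)
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mean + torch.sqrt(betas[t]) * noise
    return x_t, mask_t
```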