We tackle a challenging blind image denoising problem, in which only single distinct noisy images are available for training a denoiser, and no information about the noise is known, except that it is zero-mean, additive, and independent of the clean image. In such a setting, which often occurs in practice, it is impossible to train a denoiser with standard discriminative training or with the recently developed Noise2Noise (N2N) training; the former requires the underlying clean image for each given noisy image, and the latter requires two independently realized noisy images of each clean image. To that end, we propose the GAN2GAN (Generated-Artificial-Noise to Generated-Artificial-Noise) method, which first learns a generative model that can 1) simulate the noise in the given noisy images and 2) generate rough, noisy estimates of the clean images, and then 3) iteratively trains a denoiser with subsequently synthesized noisy image pairs (as in N2N) obtained from the generative model. In our experiments, we show that the denoiser trained with GAN2GAN achieves impressive denoising performance on both synthetic and real-world datasets in the blind denoising setting; it nearly approaches the performance of the standard discriminatively-trained or N2N-trained models, which have access to more information than ours, and it significantly outperforms a recent baseline for the same setting, \textit{e.g.}, Noise2Void, as well as a more conventional yet strong one, BM3D. The official code of our method is available at https://github.com/csm9493/GAN2GAN.
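To make the three-step procedure above concrete, the following is a minimal sketch of the N2N-style denoiser training on synthesized noisy pairs (step 3), assuming the generative noise model of steps 1 and 2 is already trained. All module architectures and names here (\textit{e.g.}, \texttt{NoiseGenerator}, \texttt{n2n\_step}) are placeholder assumptions for illustration, not the official implementation; see https://github.com/csm9493/GAN2GAN for the actual code.

\begin{verbatim}
import torch
import torch.nn as nn

class NoiseGenerator(nn.Module):
    """Placeholder for the learned generative noise model (step 1):
    maps a latent noise map to a simulated noise sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, z):
        return self.net(z)

class Denoiser(nn.Module):
    """Placeholder for the denoiser trained with synthetic pairs (step 3)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)

def n2n_step(denoiser, gen, rough_clean, optimizer):
    """One Noise2Noise-style update on a synthesized noisy image pair:
    two independent simulated noise samples are added to the rough clean
    estimate (step 2), and one noisy image serves as the training target
    for the other, so no ground-truth clean image is ever needed."""
    z1 = torch.randn_like(rough_clean)
    z2 = torch.randn_like(rough_clean)
    x1 = rough_clean + gen(z1)   # first synthetic noisy realization
    x2 = rough_clean + gen(z2)   # second, independent realization
    loss = nn.functional.mse_loss(denoiser(x1), x2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch: repeatedly retrain the denoiser on freshly synthesized
# pairs; in the full method, the rough clean estimates would also be
# refined between iterations.
denoiser, gen = Denoiser(), NoiseGenerator()
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
rough = torch.rand(8, 1, 64, 64)  # placeholder rough clean estimates
for _ in range(10):
    n2n_step(denoiser, gen, rough, opt)
\end{verbatim}

Because the target \texttt{x2} is itself noisy but has zero-mean noise independent of \texttt{x1}, minimizing the mean-squared error drives the denoiser toward the underlying clean estimate, which is exactly the property N2N training exploits.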