Convolutional neural networks (CNNs) have shown outstanding performance on image denoising with the help of large-scale datasets. Earlier methods naively trained a single CNN on many pairs of clean and noisy images. However, the conditional distribution of the clean image given a noisy one is too complicated and diverse for a single CNN to learn well. Therefore, some methods exploit additional noise level parameters or train a separate CNN for each specific noise level. These methods divide the original problem into easier sub-problems and thus achieve better performance than the naively trained CNN. In this paper, we raise two questions. The first is whether relating the conditional distribution only to noise level parameters is an optimal approach. The second is what to do when noise level information is unavailable, as in real-world scenarios. To answer these questions and provide a better solution, we propose a novel Bayesian framework based on a variational approximation of the objective function. This enables us to separate the complicated target distribution into simpler sub-distributions, so that the denoising CNN can conquer noise from each sub-distribution, which is generally an easier problem than the original. Experiments show that the proposed method delivers remarkable performance on both additive white Gaussian noise (AWGN) and real-noise denoising while requiring fewer parameters than recent state-of-the-art denoisers.