Prior probability models are a fundamental component of many image processing problems, but density estimation is notoriously difficult for high-dimensional signals such as photographic images. Deep neural networks have provided state-of-the-art solutions for problems such as denoising, which implicitly rely on a prior probability model of natural images. Here, we develop a robust and general methodology for making use of this implicit prior. We rely on a little-known statistical result due to Miyasawa (1961), who showed that the least-squares solution for removing additive Gaussian noise can be written directly in terms of the gradient of the log of the noisy signal density. We use this fact to develop a stochastic coarse-to-fine gradient ascent procedure for drawing high-probability samples from the implicit prior embedded within a CNN trained to perform blind (i.e., unknown noise level) least-squares denoising. A generalization of this algorithm to constrained sampling provides a method for using the implicit prior to solve any linear inverse problem, with no additional training. We demonstrate this general form of transfer learning in multiple applications, using the same algorithm to produce high-quality solutions for deblurring, super-resolution, inpainting, and compressive sensing.
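Miyasawa's identity, which the abstract builds on, states that the least-squares denoiser can be written as x̂(y) = y + σ²∇y log p(y), where p(y) is the density of the *noisy* observation. A minimal sketch of this identity in a toy one-dimensional Gaussian setting, where both sides have closed forms (the parameter names here are illustrative, not from the paper):

```python
import numpy as np

# Toy check of Miyasawa (1961): for y = x + n with n ~ N(0, sigma^2),
# the least-squares (posterior-mean) denoiser is
#     x_hat(y) = y + sigma^2 * d/dy log p(y),
# where p(y) is the density of the NOISY observation y.
# With a Gaussian prior x ~ N(mu0, tau^2), both sides are closed-form,
# so the identity can be verified numerically.

mu0, tau, sigma = 1.5, 2.0, 0.7          # prior mean/std, noise std
y = np.linspace(-4.0, 6.0, 201)          # grid of noisy observations

# Noisy marginal p(y) = N(mu0, tau^2 + sigma^2); its log-gradient (score):
var_y = tau**2 + sigma**2
score = -(y - mu0) / var_y               # d/dy log p(y)

x_hat_miyasawa = y + sigma**2 * score               # Miyasawa's formula
x_hat_bayes = mu0 + (tau**2 / var_y) * (y - mu0)    # known posterior mean

# The two estimators agree up to floating-point error.
print(np.max(np.abs(x_hat_miyasawa - x_hat_bayes)))
```

For a CNN denoiser, the same identity is read in reverse: the network's residual f(y) − y is proportional to the score ∇y log p(y), which is what makes the gradient-ascent sampling procedure possible without any explicit density model.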