Classic image-restoration algorithms use a variety of priors, either implicitly or explicitly. Their priors are hand-designed and their corresponding weights are heuristically assigned. Hence, deep learning methods often produce superior image restoration quality. Deep networks are, however, capable of inducing strong and hardly predictable hallucinations. Networks implicitly learn to be jointly faithful to the observed data while learning an image prior, and the separation of original data from hallucinated data downstream is then not possible. This limits their widespread adoption in image restoration. Furthermore, it is often the hallucinated part that falls victim to degradation-model overfitting. We present an approach that decouples the network-prior-based hallucination term from the data-fidelity term. We refer to our framework as the Bayesian Integration of a Generative Prior (BIGPrior). Our method is rooted in a Bayesian framework and tightly connected to classic restoration methods. In fact, it can be viewed as a generalization of a large family of classic restoration algorithms. We use network inversion to extract image prior information from a generative network. We show that, on image colorization, inpainting, and denoising, our framework consistently improves the inversion results. Our method, though partly reliant on the quality of the generative network inversion, is competitive with state-of-the-art supervised and task-specific restoration methods. It also provides an additional metric that sets forth the degree of prior reliance per pixel relative to data fidelity.
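To make the decoupling concrete, the short sketch below illustrates one plausible form it could take: a per-pixel convex combination of a generative-prior estimate and a data-fidelity estimate, where the fusion map doubles as the per-pixel prior-reliance metric mentioned above. The function and variable names (bigprior_fuse, phi_map) and the exact combination rule are illustrative assumptions, not the paper's stated implementation.

    import torch

    def bigprior_fuse(prior_estimate: torch.Tensor,
                      data_estimate: torch.Tensor,
                      phi: torch.Tensor) -> torch.Tensor:
        # Per-pixel convex combination of a generative-prior estimate and a
        # data-fidelity estimate. phi in [0, 1] acts as the prior-reliance map:
        # phi = 1 means the pixel is fully hallucinated from the prior,
        # phi = 0 means it is fully determined by the observed data.
        phi = phi.clamp(0.0, 1.0)
        return phi * prior_estimate + (1.0 - phi) * data_estimate

    # Illustrative usage on random tensors of shape (batch, channels, height, width).
    y_prior = torch.rand(1, 3, 64, 64)   # hypothetical output of generative-network inversion
    y_data = torch.rand(1, 3, 64, 64)    # hypothetical data-fidelity estimate from the observation
    phi_map = torch.rand(1, 1, 64, 64)   # hypothetical fusion map, broadcast over channels
    restored = bigprior_fuse(y_prior, y_data, phi_map)
    prior_reliance = phi_map             # per-pixel degree of prior reliance

Under this reading, setting phi to zero everywhere reduces the output to the data-fidelity estimate alone, which is how the framework can be seen as generalizing classic restoration algorithms that rely only on explicit fidelity and hand-designed priors.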