Exploiting image patches instead of whole images has proved to be a powerful approach for tackling various problems in image processing. Recently, Wasserstein patch priors (WPP), which are based on comparing the patch distributions of the unknown image and a reference image, were successfully used as data-driven regularizers in the variational formulation of superresolution. However, for each input image, this approach requires the solution of a non-convex minimization problem, which is computationally costly. In this paper, we propose to learn two kinds of neural networks in an unsupervised way based on WPP loss functions. First, we show how convolutional neural networks (CNNs) can be incorporated. Once the network, called WPPNet, is learned, it can be applied very efficiently to any input image. Second, we incorporate conditional normalizing flows to provide a tool for uncertainty quantification. Numerical examples demonstrate the very good performance of WPPNets for superresolution on various image classes even if the forward operator is known only approximately.
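To make the idea of a WPP loss concrete, the sketch below compares the empirical patch distributions of two images using a sliced Wasserstein distance, a common cheap surrogate for the Wasserstein-2 distance. This is an illustrative sketch, not the paper's implementation: the patch size, stride, number of projections, and the choice of the sliced surrogate are all assumptions on our part.

import numpy as np

def extract_patches(img, size=6, stride=2):
    """Collect all size x size patches of a 2D image as flat vectors."""
    h, w = img.shape
    patches = [
        img[i:i + size, j:j + size].ravel()
        for i in range(0, h - size + 1, stride)
        for j in range(0, w - size + 1, stride)
    ]
    return np.stack(patches)

def sliced_wasserstein(p, q, n_proj=64, seed=0):
    """Average squared 1D Wasserstein-2 distance over random projections."""
    rng = np.random.default_rng(seed)
    d = p.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)
        a = np.sort(p @ theta)
        b = np.sort(q @ theta)
        # resample onto a common grid if the patch counts differ
        m = min(len(a), len(b))
        a = np.interp(np.linspace(0, 1, m), np.linspace(0, 1, len(a)), a)
        b = np.interp(np.linspace(0, 1, m), np.linspace(0, 1, len(b)), b)
        total += np.mean((a - b) ** 2)
    return total / n_proj

# Usage: a WPP-style loss between a reconstruction and a reference image
# (random arrays stand in for a network output and a reference image).
rec = np.random.rand(64, 64)
ref = np.random.rand(64, 64)
loss = sliced_wasserstein(extract_patches(rec), extract_patches(ref))
print(f"patch loss: {loss:.4f}")

In a training setup of the kind the abstract describes, such a patch-distribution loss would be evaluated between the network output and a reference image and minimized by gradient descent over the network weights, with no paired ground-truth images required.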