Learning neural networks from only a small amount of data is an important ongoing research topic with tremendous potential for applications. In this paper, we introduce a regularizer for the variational modeling of inverse problems in imaging based on normalizing flows. Our regularizer, called patchNR, involves a normalizing flow learned on patches of very few images. In particular, the training is independent of the considered inverse problem, so that the same regularizer can be used for different forward operators acting on the same class of images. By investigating the distribution of patches versus that of the whole image class, we prove that our variational model is indeed a MAP approach. Our model can be generalized to conditional patchNRs if additional supervised information is available. Numerical examples for superresolution of material images and for low-dose or limited-angle computed tomography (CT) demonstrate that our method provides high-quality results among methods with similar assumptions, while requiring only very few images for training.