Learning neural networks from only a small amount of data is an important ongoing research topic with tremendous potential for applications. In this paper, we introduce a regularizer for the variational modeling of inverse problems in imaging, based on normalizing flows. Our regularizer, called patchNR, involves a normalizing flow learned on patches of very few images. The subsequent reconstruction method is completely unsupervised, and the same regularizer can be used for different forward operators acting on the same class of images. By investigating the distribution of patches versus that of the whole image class, we prove that our variational model is indeed a MAP approach. Our model can be generalized to conditional patchNRs if additional supervised information is available. Numerical examples for low-dose CT, limited-angle CT and superresolution of material images demonstrate that our method provides high-quality results among unsupervised methods while requiring only very few images.
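As a rough sketch of the variational model summarized above (the notation here is assumed for illustration and not taken from the abstract): given a forward operator $F$, an observation $y$, a data-fidelity term $\mathcal{D}$, patch extractors $P_i$, and a regularization weight $\lambda > 0$, a MAP-style reconstruction with a patch-based normalizing-flow regularizer would plausibly take the form

$$
\hat{x} \in \operatorname*{arg\,min}_{x} \; \mathcal{D}\bigl(F(x), y\bigr) \;+\; \lambda \sum_{i=1}^{N} \bigl(-\log p_{\theta}(P_i x)\bigr),
$$

where $p_{\theta}$ denotes the patch density induced by the learned normalizing flow $\mathcal{T}_{\theta}$ through the change-of-variables formula, e.g. $p_{\theta}(p) = p_Z\bigl(\mathcal{T}_{\theta}^{-1}(p)\bigr)\,\bigl|\det \nabla \mathcal{T}_{\theta}^{-1}(p)\bigr|$ with a simple latent density $p_Z$. The first term enforces consistency with the measurements, while the second penalizes patches that are unlikely under the flow learned from the few training images.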