Real-world image super-resolution (SR) tasks often lack paired datasets, which limits the application of supervised techniques. As a result, such tasks are usually approached with unpaired techniques based on Generative Adversarial Networks (GANs), which yield complex training losses with several regularization terms, such as content and identity losses. We theoretically investigate the optimization problems which arise in such models and make two surprising observations. First, the learned SR map is always an optimal transport (OT) map. Second, we empirically show that the learned map is biased, i.e., it may fail to transform the distribution of low-resolution images to that of high-resolution images. Inspired by these findings, we propose an algorithm for unpaired SR which learns an unbiased OT map for the perceptual transport cost. Unlike existing GAN-based alternatives, our algorithm has a simple optimization objective, reducing the need for complex hyperparameter selection and additional regularizations. At the same time, it provides nearly state-of-the-art performance on the large-scale unpaired AIM-19 dataset.
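For reference, the notion of an OT map used above follows the standard Monge formulation; this is a background sketch only, and the specific perceptual transport cost $c$ is an assumption standing in for the cost defined in the paper body. Given the distribution $\mu$ of low-resolution images, the distribution $\nu$ of high-resolution images, and a transport cost $c(x,y)$, the Monge problem is
$$
\inf_{T:\; T_{\#}\mu = \nu} \int c\big(x, T(x)\big)\, d\mu(x),
$$
and any minimizer $T^{*}$ is called an OT map. In this terminology, a learned SR map $T$ is unbiased when it satisfies the pushforward constraint $T_{\#}\mu = \nu$, and biased when it does not.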