The introduction of perceptual loss alleviates the over-smoothing that per-pixel difference loss functions cause in reconstructed images, and has brought significant progress to single image super-resolution reconstruction. Furthermore, generative adversarial networks (GANs) have been applied to super-resolution, effectively improving the visual quality of reconstructed images. However, at high upscaling factors, excessive abnormal inference by the network produces distorted structures, so that the reconstructed image deviates noticeably from the ground-truth image. To fundamentally improve the quality of reconstructed images, this paper proposes an effective method called Dual Perceptual Loss (DP Loss), which replaces the original perceptual loss for single image super-resolution reconstruction. Owing to the complementarity between VGG features and ResNet features, the proposed DP Loss exploits the advantages of learning both features simultaneously, which significantly improves the reconstruction quality. Qualitative and quantitative analysis on benchmark datasets demonstrates the superiority of the proposed method over state-of-the-art super-resolution methods.
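To make the idea of a dual perceptual loss concrete, the following is a minimal PyTorch sketch that sums feature-space distances computed with a frozen VGG-19 and a frozen ResNet-50. It is an illustration under assumptions, not the paper's implementation: the chosen feature layers, the MSE criterion, and the weights `lambda_vgg` and `lambda_res` are placeholders and would need to be set to match the paper's configuration.

```python
# A minimal sketch of a dual perceptual loss combining VGG and ResNet
# feature distances (assumes PyTorch and torchvision are installed).
# Layer cut-offs and loss weights below are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models


class DualPerceptualLoss(nn.Module):
    def __init__(self, lambda_vgg: float = 1.0, lambda_res: float = 1.0):
        super().__init__()
        # Frozen VGG-19 feature extractor (convolutional layers only).
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:35]
        # Frozen ResNet-50 feature extractor (all stages before avgpool/fc).
        resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        res = nn.Sequential(*list(resnet.children())[:-2])
        for p in list(vgg.parameters()) + list(res.parameters()):
            p.requires_grad = False
        self.vgg, self.res = vgg.eval(), res.eval()
        self.lambda_vgg, self.lambda_res = lambda_vgg, lambda_res
        self.criterion = nn.MSELoss()

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        # Distance between deep features of the super-resolved image (sr)
        # and the ground-truth image (hr), measured in both feature spaces.
        loss_vgg = self.criterion(self.vgg(sr), self.vgg(hr))
        loss_res = self.criterion(self.res(sr), self.res(hr))
        return self.lambda_vgg * loss_vgg + self.lambda_res * loss_res


if __name__ == "__main__":
    # Usage example with random 3-channel image batches.
    dp_loss = DualPerceptualLoss()
    sr, hr = torch.rand(2, 3, 96, 96), torch.rand(2, 3, 96, 96)
    print(dp_loss(sr, hr).item())
```

The design choice this sketch reflects is the one stated in the abstract: because VGG and ResNet features are complementary, penalizing discrepancies in both feature spaces simultaneously constrains the generator more strongly than either perceptual term alone.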