We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated from a low-resolution (LR) input. Recently, generative adversarial networks (GANs) have become popular for hallucinating details. Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task. Also, GAN-generated fake details can undermine the realism of the whole image. We address these issues by proposing best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow the estimated patches to dynamically seek the best supervision during training, which is beneficial for producing more reasonable details. In addition, we propose a region-aware adversarial learning strategy that directs our model to adaptively focus on generating details for textured areas. Extensive experiments justify the effectiveness of our method. We also construct an ultra-high-resolution 4K dataset to facilitate future super-resolution research.
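The "best supervision" idea above can be illustrated with a minimal sketch: instead of a fixed one-to-one pairing between an estimated patch and its ground-truth patch, each super-resolved patch is supervised by its nearest candidate among several HR patches (e.g. neighbors in the ground-truth image). This is an illustrative sketch only, not the paper's implementation; `best_buddy_loss` and its tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def best_buddy_loss(sr_patches, hr_candidates):
    """Sketch of a best-buddy pixel loss (hypothetical, for illustration).

    sr_patches:    (N, D) flattened super-resolved patches
    hr_candidates: (N, K, D) K candidate HR supervision patches per SR patch

    Each SR patch is supervised by its nearest candidate ("best buddy")
    under Euclidean distance, relaxing the one-to-one constraint.
    """
    # Distances between each SR patch and its K candidates: (N, K)
    dists = torch.cdist(sr_patches.unsqueeze(1), hr_candidates).squeeze(1)
    best = dists.argmin(dim=1)  # index of the best buddy per patch
    buddies = hr_candidates[torch.arange(sr_patches.size(0)), best]
    return F.l1_loss(sr_patches, buddies)
```

In practice such a loss would be combined with adversarial and perceptual terms; the sketch only shows the dynamic patch-to-supervision matching.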