With the development of Deep Neural Networks (DNNs), numerous DNN-based methods have been proposed for Single Image Super-Resolution (SISR). However, existing methods mostly train DNNs on uniformly sampled LR-HR patch pairs, which prevents them from fully exploiting the informative patches within an image. In this paper, we present a simple yet effective data augmentation method. We first devise a heuristic metric to evaluate the informativeness of each patch pair. To reduce the computational cost of evaluating all patch pairs, we further propose to accelerate the computation of our metric using integral images, achieving about two orders of magnitude speedup. Our method then samples training patch pairs according to their informativeness. Extensive experiments show that our sampling augmentation consistently improves convergence and boosts the performance of various SISR architectures, including EDSR, RCAN, RDN, SRCNN and ESPCN, across different scaling factors (x2, x3, x4). Code is available at https://github.com/littlepure2333/SamplingAug
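The abstract does not specify the exact informativeness metric, but the integral-image trick it mentions applies to any per-patch statistic built from local sums. As an illustration only, the sketch below uses per-patch variance as a hypothetical informativeness proxy and computes it for every patch in O(1) per patch via summed-area tables; the function names and the choice of variance are assumptions, not the paper's actual metric.

```python
import numpy as np

def integral_image(x):
    # Summed-area table with a zero row/column prepended,
    # so any rectangular sum becomes four table lookups.
    return np.pad(x, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def patch_variance_map(img, k):
    # Variance of every k x k patch, from integral images of the
    # image and its elementwise square: Var = E[x^2] - (E[x])^2.
    img = img.astype(np.float64)
    s1 = integral_image(img)        # running sums of pixel values
    s2 = integral_image(img ** 2)   # running sums of squared values
    n = k * k
    # Four-corner lookups give the sum over each k x k window.
    sum1 = s1[k:, k:] - s1[:-k, k:] - s1[k:, :-k] + s1[:-k, :-k]
    sum2 = s2[k:, k:] - s2[:-k, k:] - s2[k:, :-k] + s2[:-k, :-k]
    return sum2 / n - (sum1 / n) ** 2
```

Each entry of the returned map scores one candidate patch; sampling patches with probability proportional to (or thresholded on) such a score is the general pattern the abstract describes, replacing a naive per-patch loop that would cost O(k^2) per patch.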