Recently, many works have designed wider and deeper networks to achieve higher image super-resolution performance. Despite their outstanding performance, they still demand high computational resources, preventing them from being directly applied to embedded devices. To reduce computational cost while maintaining performance, we propose a novel Ghost Residual Attention Network (GRAN) for efficient super-resolution. This paper introduces Ghost Residual Attention Block (GRAB) groups to overcome a drawback of the standard convolutional operation, i.e., the redundancy of intermediate features. GRAB consists of the Ghost Module and the Channel and Spatial Attention Module (CSAM) to alleviate the generation of redundant features. Specifically, the Ghost Module reveals the information underlying intrinsic features by employing cheap linear operations in place of standard convolutions. By reducing redundant features with the Ghost Module, our model decreases the memory and computing resource requirements of the network. The CSAM attends more comprehensively to both where and what features are extracted, which is critical to recovering image details. Experiments conducted on benchmark datasets demonstrate the superior performance of our method in both qualitative and quantitative evaluations. Compared to the baseline models, we achieve higher performance with lower computational cost, with parameters and FLOPs reduced by more than ten times.
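To make the two building blocks concrete, below is a minimal PyTorch-style sketch of a Ghost Module (a small standard convolution produces intrinsic features, and cheap depthwise convolutions generate the remaining "ghost" features), a CBAM-style channel-and-spatial attention module standing in for the CSAM, and a residual block combining them. The specific kernel sizes, the ghost ratio, the reduction factor, and the internal layout of GRAB are assumptions for illustration and are not specified in the abstract.

```python
import torch
import torch.nn as nn


class GhostModule(nn.Module):
    """Sketch of a Ghost Module: a costly convolution makes a few intrinsic
    maps; cheap depthwise convolutions derive the remaining 'ghost' maps."""
    def __init__(self, in_ch, out_ch, ratio=2, kernel_size=3, cheap_kernel=3):
        super().__init__()
        intrinsic_ch = out_ch // ratio          # maps from the standard conv
        ghost_ch = out_ch - intrinsic_ch        # maps from cheap linear ops
        self.primary = nn.Conv2d(in_ch, intrinsic_ch, kernel_size,
                                 padding=kernel_size // 2, bias=False)
        # Depthwise convolution plays the role of the cheap linear operation.
        self.cheap = nn.Conv2d(intrinsic_ch, ghost_ch, cheap_kernel,
                               padding=cheap_kernel // 2,
                               groups=intrinsic_ch, bias=False)

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        return torch.cat([intrinsic, ghost], dim=1)


class CSAM(nn.Module):
    """Channel and Spatial Attention Module (CBAM-style assumption):
    channel attention decides 'what' to emphasize, spatial attention 'where'."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_att(x)                          # what
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * self.spatial_att(torch.cat([avg, mx], 1))  # where


class GRAB(nn.Module):
    """Ghost Residual Attention Block (assumed layout): two Ghost Modules
    with an activation, followed by CSAM, wrapped in a residual connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            GhostModule(channels, channels),
            nn.ReLU(inplace=True),
            GhostModule(channels, channels),
            CSAM(channels))

    def forward(self, x):
        return x + self.body(x)
```

As a rough usage check, `GRAB(64)(torch.randn(1, 64, 48, 48))` returns a tensor of the same shape; because half of each Ghost Module's output comes from depthwise convolutions, its parameter and FLOP counts are well below those of two full 3x3 convolutions with the same channel width.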