Although convolutional network-based methods have boosted the performance of single image super-resolution (SISR), their huge computation costs restrict their practical applicability. In this paper, we develop a computationally efficient yet accurate network based on the proposed attentive auxiliary features (A$^2$F) for SISR. First, to exploit features from the bottom layers, the auxiliary features from all preceding layers are projected into a common space. Then, to better utilize these projected auxiliary features and filter out redundant information, channel attention is employed to select the most important common features conditioned on the current layer's features. We incorporate these two modules into a block and implement it as a lightweight network. Experimental results on large-scale datasets demonstrate the effectiveness of the proposed model against state-of-the-art (SOTA) SR methods. Notably, with fewer than 320K parameters, A$^2$F outperforms SOTA methods at all scales, which proves its ability to better utilize the auxiliary features. Code is available at https://github.com/wxxxxxxh/A2F-SR.
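As a rough illustration of the two modules described above (a minimal NumPy sketch, not the authors' implementation), auxiliary features from earlier layers can be projected into a common space with a 1x1 convolution (a channel-wise matrix multiply), and channel attention can then re-weight the projected channels. The shapes, the bottleneck ratio, and all function names here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def project_auxiliary(features, w_proj):
    # features: list of (C_i, H, W) maps from previous layers.
    # Concatenate along channels and apply a 1x1 convolution
    # (a matrix multiply over the channel axis) to map them
    # into a common feature space.
    x = np.concatenate(features, axis=0)              # (sum C_i, H, W)
    c, h, w = x.shape
    return (w_proj @ x.reshape(c, -1)).reshape(-1, h, w)

def channel_attention(x, w1, w2):
    # Squeeze: global average pool gives one statistic per channel.
    s = x.mean(axis=(1, 2))                           # (C,)
    # Excite: bottleneck MLP + sigmoid yields weights in (0, 1).
    a = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))         # (C,)
    # Re-weight channels to keep the most important common features.
    return x * a[:, None, None]

rng = np.random.default_rng(0)
aux = [rng.standard_normal((8, 4, 4)) for _ in range(3)]  # 3 previous layers
w_proj = rng.standard_normal((16, 24)) * 0.1              # common space: 16 channels
w1 = rng.standard_normal((4, 16)) * 0.1                   # reduction ratio 4 (assumed)
w2 = rng.standard_normal((16, 4)) * 0.1

common = project_auxiliary(aux, w_proj)
out = channel_attention(common, w1, w2)
print(out.shape)  # (16, 4, 4)
```

Because the attention weights pass through a sigmoid, each output channel is a scaled-down copy of the corresponding projected channel; in the full network, this gated result would be fused with the current layer's features inside each block.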