Convolutional neural networks are the most successful models in single-image super-resolution. Deeper networks, residual connections, and attention mechanisms have further improved their performance. However, these strategies often improve reconstruction quality at the expense of a considerable increase in computational cost. This paper introduces a new lightweight super-resolution model based on an efficient method for residual feature and attention aggregation. To make efficient use of the residual features, they are hierarchically aggregated into feature banks for later use at the network output. In parallel, a lightweight hierarchical attention mechanism extracts the most relevant features from the network into attention banks, improving the final output and preventing information loss through the successive operations inside the network. The processing is thus split into two independent computation paths that can be carried out simultaneously, resulting in a highly efficient and effective model for reconstructing fine details in high-resolution images from their low-resolution counterparts. Our proposed architecture surpasses state-of-the-art performance on several benchmark datasets while maintaining a relatively low computational and memory footprint.
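The two-path design described above can be illustrated with a toy sketch. This is a minimal, illustrative example only: the names (`feature_bank`, `attention_bank`) and the simple softmax gate standing in for the attention mechanism are assumptions, not the paper's actual layers, and plain vectors stand in for convolutional feature maps.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, w):
    """Toy residual block: linear map + ReLU, with a skip connection."""
    return x + np.maximum(w @ x, 0.0)

# Toy "feature map": C channels flattened to a vector (illustrative only).
C = 8
x = rng.standard_normal(C)
weights = [rng.standard_normal((C, C)) * 0.1 for _ in range(3)]

# Path 1: residual blocks; every intermediate output is hierarchically
# collected into a feature bank instead of being discarded.
feature_bank = []
h = x
for w in weights:
    h = residual_block(h, w)
    feature_bank.append(h)

# Path 2: a lightweight attention over the same intermediates (a channel-wise
# softmax gate here; the paper's exact attention mechanism differs).
def channel_attention(f):
    a = np.exp(f - f.max())
    return f * (a / a.sum())

attention_bank = [channel_attention(f) for f in feature_bank]

# The two banks are fused only at the output: aggregate each bank, then
# combine them with the input via a global skip connection.
out = x + np.mean(feature_bank, axis=0) + np.mean(attention_bank, axis=0)
print(out.shape)
```

Because the attention path reads from the same stored intermediates rather than from the residual path's running state, the two banks can be computed independently, which is what enables the parallel, low-overhead design.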