Deep convolutional neural networks (CNNs) have achieved remarkable performance in single image super-resolution (SISR). However, very deep networks suffer from training difficulty and struggle to achieve further performance gains. There are two main trends in addressing this problem: improving the network architecture for better propagation of features through a large number of layers, and designing an attention mechanism that selects the most informative features. Recent SISR solutions propose advanced attention and self-attention mechanisms. However, constructing a network that uses an attention block in the most efficient way remains a challenging problem. To address this issue, we propose a general recursively defined residual block (RDRB) for better feature extraction and propagation through network layers. Based on RDRB, we design the recursively defined residual network (RDRN), a novel network architecture that utilizes attention blocks efficiently. Extensive experiments show that the proposed model achieves state-of-the-art results on several popular super-resolution benchmarks and outperforms previous methods by up to 0.43 dB.
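To make the recursive construction concrete, the following is a minimal PyTorch-style sketch of a recursively defined residual block: at the base depth it is an ordinary convolutional block, and at each greater depth it composes two shallower copies of itself inside a residual connection. The class name, the choice of base block, and the omission of attention sub-blocks are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class RDRB(nn.Module):
    """Illustrative sketch of a recursively defined residual block.

    depth == 0: a plain two-layer conv block with a skip connection.
    depth  > 0: two shallower RDRBs wrapped in a residual connection,
    so a skip path exists at every level of the recursion.
    (Hypothetical structure; the paper's block may differ, e.g. by
    inserting attention modules between sub-blocks.)
    """

    def __init__(self, channels: int, depth: int):
        super().__init__()
        if depth == 0:
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
        else:
            self.body = nn.Sequential(
                RDRB(channels, depth - 1),
                RDRB(channels, depth - 1),
            )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection applied at every recursion level.
        return x + self.body(x)


# Usage example: a depth-3 block over 64-channel feature maps.
block = RDRB(channels=64, depth=3)
y = block(torch.randn(1, 64, 48, 48))  # output shape matches input
```

The key property this sketch illustrates is that doubling the recursion depth doubles the number of convolutional sub-blocks while adding a new skip path around each pair, which is what lets features and gradients propagate through a large number of layers.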