Convolutional neural networks have enabled remarkable advances in single image super-resolution (SISR) over the last decade. Among recent advances in SISR, attention mechanisms have become crucial for high-performance SR models. However, it remains unclear why and how attention mechanisms work in SISR. In this work, we attempt to quantify and visualize attention mechanisms in SISR and show that not all attention modules are equally beneficial. We then propose the attention in attention network (A$^2$N) for more efficient and accurate SISR. Specifically, A$^2$N consists of a non-attention branch and a coupling attention branch. A dynamic attention module is proposed to generate weights for these two branches so as to dynamically suppress unwanted attention adjustments, where the weights adapt to the input features. This allows attention modules to specialize on beneficial examples without penalizing the rest, and thus greatly improves the capacity of the attention network with little parameter overhead. Experimental results demonstrate that our final model, A$^2$N, achieves a superior performance trade-off compared with state-of-the-art networks of similar size. Code is available at https://github.com/haoyuc/A2N.
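As a rough illustration of the idea, the sketch below shows one way a block with a non-attention branch, an attention branch, and a dynamic attention module could be wired up in PyTorch. The specific layer choices (a plain 3x3 convolution for the non-attention branch, a squeeze-and-excitation style channel attention for the attention branch, and a pooled-feature MLP with softmax for the dynamic weights) are assumptions for illustration only; the exact A$^2$N architecture is in the code release linked above.

```python
import torch
import torch.nn as nn


class AttentionInAttentionBlock(nn.Module):
    """Minimal sketch of an attention-in-attention style block (hypothetical layout)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Non-attention branch: a plain 3x3 convolution (assumed form).
        self.non_attn = nn.Conv2d(channels, channels, 3, padding=1)
        # Attention branch: convolution followed by a channel-attention gate
        # (squeeze-and-excitation style stand-in; the paper's branch may differ).
        self.attn_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.attn_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Dynamic attention module: predicts two per-example weights (one per
        # branch) from globally pooled input features, so the mix adapts to the input.
        self.dynamic = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2),
            nn.Softmax(dim=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.dynamic(x)                                 # (N, 2) input-adaptive weights
        non_attn_out = self.non_attn(x)                     # non-attention branch
        attn_out = self.attn_conv(x) * self.attn_gate(x)    # attention branch
        # Weighted sum of the two branches; weights broadcast over C, H, W.
        out = (w[:, 0].view(-1, 1, 1, 1) * non_attn_out
               + w[:, 1].view(-1, 1, 1, 1) * attn_out)
        return out + x                                      # residual connection (assumed)


if __name__ == "__main__":
    block = AttentionInAttentionBlock(channels=64)
    feat = torch.randn(2, 64, 48, 48)
    print(block(feat).shape)  # torch.Size([2, 64, 48, 48])
```

Because the dynamic weights are produced per example, the block can push the mix toward the non-attention branch on inputs where attention adjustments would hurt, which is the behavior described in the abstract.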