Convolutional neural networks have enabled remarkable advances in single image super-resolution (SISR) over the last decade. Among recent advances in SISR, attention mechanisms have been crucial to high-performance SR models. However, it remains unclear why and how attention mechanisms work in SISR. In this work, we attempt to quantify and visualize attention mechanisms in SISR and show that not all attention modules are equally beneficial. We then propose the attention in attention network (A$^2$N) for more efficient and accurate SISR. Specifically, A$^2$N consists of a non-attention branch and a coupling attention branch. A dynamic attention module is proposed to generate weights for these two branches to suppress unwanted attention adjustments dynamically, where the weights change adaptively according to the input features. This allows attention modules to specialize to beneficial examples without penalizing others, and thus greatly improves the capacity of the attention network with little parameter overhead. Experimental results demonstrate that our final model A$^2$N achieves a superior performance trade-off compared with state-of-the-art networks of similar size. Codes are available at https://github.com/haoyuc/A2N.
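A minimal sketch of the attention-in-attention idea described above, assuming a PyTorch setting: a dynamic attention module pools the input features and predicts two soft weights that scale a non-attention branch and an attention branch before their outputs are combined. This is not the authors' implementation (see the repository for that); the module names, layer sizes, and the choice of pixel attention inside the attention branch are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class A2Block(nn.Module):
    """Illustrative attention-in-attention block (hypothetical layout)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Non-attention branch: plain convolutions.
        self.non_attn = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Attention branch: convolution modulated by a pixel-attention map.
        self.attn_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.pixel_attn = nn.Sequential(
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        # Dynamic attention module: predicts one weight per branch from the input.
        self.dynamic = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Branch weights sum to 1 and adapt to the input features.
        w = F.softmax(self.dynamic(x), dim=1)  # shape (B, 2)
        w_non = w[:, 0:1, None, None]
        w_attn = w[:, 1:2, None, None]
        out_non = self.non_attn(x)
        out_attn = self.attn_conv(x) * self.pixel_attn(x)
        # The learned weights can suppress or emphasize the attention
        # adjustment for each input, with only a small parameter overhead.
        return x + w_non * out_non + w_attn * out_attn


if __name__ == "__main__":
    block = A2Block(channels=40)
    y = block(torch.randn(1, 40, 48, 48))
    print(y.shape)  # torch.Size([1, 40, 48, 48])
```

The only extra parameters relative to a plain two-branch block come from the small pooling-and-linear head that produces the two branch weights, which is why the capacity gain comes at little parameter cost.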