Deep convolutional neural networks (CNNs) using attention mechanisms have achieved great success in dynamic scene deblurring. In most of these networks, only the features refined by the attention maps are passed to the next layer, and the attention maps of different layers are separated from each other, so the attention information from different layers in the CNN is not fully exploited. To address this problem, we introduce a new continuous cross-layer attention transmission (CCLAT) mechanism that can exploit hierarchical attention information from all the convolutional layers. Based on the CCLAT mechanism, we use a very simple attention module to construct a novel residual dense attention fusion block (RDAFB). In RDAFB, the attention maps inferred from the outputs of the preceding RDAFB and of each layer are directly connected to the subsequent ones, realizing the CCLAT mechanism. Taking RDAFB as the building block, we design an effective architecture for dynamic scene deblurring named RDAFNet. Experiments on benchmark datasets show that the proposed model outperforms state-of-the-art deblurring approaches and demonstrate the effectiveness of the CCLAT mechanism. The source code is available at: https://github.com/xjmz6/RDAFNet.
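To make the CCLAT idea concrete, the following is a minimal, hypothetical PyTorch sketch of a block that passes its attention map forward and fuses it with the next block's attention, instead of discarding it after refining the features. All module names, channel sizes, and the fusion scheme here are illustrative assumptions, not the authors' exact RDAFB/RDAFNet implementation (see the repository above for that).

```python
# Hypothetical sketch of cross-layer attention transmission (CCLAT).
# The attention inferred in each block is both applied to the features
# and handed to the next block, where it is fused with the new attention.
import torch
import torch.nn as nn

class SimpleAttention(nn.Module):
    """A very simple spatial attention module (assumed: 1x1 conv + sigmoid)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return torch.sigmoid(self.conv(x))

class RDAFBSketch(nn.Module):
    """Sketch of a residual dense attention fusion block (RDAFB).

    Takes the previous block's attention map as an extra input and fuses
    it with the attention inferred here, so attention information is
    transmitted continuously across blocks.
    """
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.attn = SimpleAttention(channels)
        # Fuse previous and current attention maps (illustrative choice).
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x, prev_attn):
        feat = torch.relu(self.conv(x))
        attn = self.attn(feat)
        # Cross-layer attention transmission: combine with previous attention.
        attn = torch.sigmoid(self.fuse(torch.cat([attn, prev_attn], dim=1)))
        out = x + feat * attn          # residual connection
        return out, attn               # pass the attention map onward

if __name__ == "__main__":
    channels = 32
    blocks = nn.ModuleList([RDAFBSketch(channels) for _ in range(3)])
    x = torch.randn(1, channels, 64, 64)
    attn = torch.ones_like(x)          # neutral initial attention
    for block in blocks:
        x, attn = block(x, attn)
    print(x.shape)                     # torch.Size([1, 32, 64, 64])
```

The key design point this sketch illustrates is that the attention map becomes a second stream flowing through the network alongside the features, so later blocks can build on the attention decisions of earlier ones.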