Image motion blur results from a combination of object motion and camera shake, and the resulting blur is generally directional and non-uniform. Previous research attempted to remove non-uniform blur using self-recurrent multi-scale, multi-patch, or multi-temporal architectures with self-attention, achieving decent results. However, self-recurrent frameworks typically lead to longer inference times, while inter-pixel or inter-channel self-attention may cause excessive memory usage. This paper proposes a Blur-aware Attention Network (BANet) that accomplishes accurate and efficient deblurring in a single forward pass. BANet uses region-based self-attention with multi-kernel strip pooling to disentangle blur patterns of different magnitudes and orientations, and cascaded parallel dilated convolutions to aggregate multi-scale content features. Extensive experimental results on the GoPro and RealBlur benchmarks demonstrate that the proposed BANet performs favorably against state-of-the-art methods in blurred image restoration and can deliver deblurred results in real time.
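To make the strip-pooling idea concrete, the sketch below illustrates multi-kernel strip pooling on a single-channel feature map: full-width (or full-height) strips of varying thickness are average-pooled and the strip means are broadcast back to the input resolution, so thin strips respond to fine directional blur and thick strips to coarse blur. This is a hypothetical NumPy illustration, not the paper's implementation; the actual BANet applies such pooling inside a learned attention branch on multi-channel feature tensors, and the function names (`strip_pool`, `multi_kernel_strip_pool`) and kernel choices here are assumptions for illustration only.

```python
import numpy as np

def strip_pool(x, k, horizontal=True):
    """Average-pool a 2D map over full-width (horizontal) or full-height
    (vertical) strips of thickness k, then broadcast each strip's mean
    back to the original resolution. Single-channel sketch only."""
    h, w = x.shape
    if horizontal:
        pad = (-h) % k                                # pad so k divides the height
        xp = np.pad(x, ((0, pad), (0, 0)), mode="edge")
        means = xp.reshape(-1, k, w).mean(axis=(1, 2))  # one value per strip
        return np.repeat(means, k)[:h, None] * np.ones((1, w))
    else:
        pad = (-w) % k                                # pad so k divides the width
        xp = np.pad(x, ((0, 0), (0, pad)), mode="edge")
        means = xp.reshape(h, -1, k).mean(axis=(0, 2))  # one value per strip
        return np.ones((h, 1)) * np.repeat(means, k)[None, :w]

def multi_kernel_strip_pool(x, kernels=(1, 3)):
    """Average the strip-pooled maps over several kernel thicknesses and
    both orientations, capturing blur of different magnitudes/directions."""
    maps = [strip_pool(x, k, horizontal=h) for k in kernels for h in (True, False)]
    return sum(maps) / len(maps)

feat = np.arange(16, dtype=float).reshape(4, 4)
att = multi_kernel_strip_pool(feat)   # same spatial size as the input
```

In the paper's setting, a map like `att` would modulate the content features as an attention signal rather than being used directly.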