Image motion blur usually results from moving objects or camera shake. Such blur is generally directional and non-uniform. Previous research efforts attempt to solve non-uniform blur by using self-recurrent multi-scale or multi-patch architectures accompanied by self-attention. However, using self-recurrent frameworks typically leads to longer inference times, while inter-pixel or inter-channel self-attention may cause excessive memory usage. This paper proposes the blur-aware attention network (BANet), which accomplishes accurate and efficient deblurring in a single forward pass. BANet utilizes region-based self-attention with multi-kernel strip pooling to disentangle blur patterns of different degrees, and cascaded parallel dilated convolutions to aggregate multi-scale content features. Extensive experimental results on the GoPro and HIDE benchmarks demonstrate that the proposed BANet performs favorably against the state of the art in blurred image restoration and can produce deblurred results in real time.
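To make the two building blocks named above concrete, the following is a minimal PyTorch sketch of multi-kernel strip pooling and cascaded parallel dilated convolutions. The layer sizes, strip widths, dilation rates, and the way the branches are fused are illustrative assumptions, not the authors' exact BANet design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiKernelStripPooling(nn.Module):
    """Pools features along horizontal and vertical strips of several widths to
    capture directional blur of different extents, then projects back to the
    input channel count. Strip sizes are an assumption for illustration."""
    def __init__(self, channels, strip_sizes=(1, 3, 5)):
        super().__init__()
        self.strip_sizes = strip_sizes
        self.proj = nn.Conv2d(channels * (2 * len(strip_sizes) + 1), channels, 1)

    def forward(self, x):
        _, _, h, w = x.shape
        feats = [x]
        for s in self.strip_sizes:
            # Horizontal strips: squeeze width to s, keep full height.
            horiz = F.adaptive_avg_pool2d(x, (h, s))
            feats.append(F.interpolate(horiz, size=(h, w), mode='nearest'))
            # Vertical strips: squeeze height to s, keep full width.
            vert = F.adaptive_avg_pool2d(x, (s, w))
            feats.append(F.interpolate(vert, size=(h, w), mode='nearest'))
        return self.proj(torch.cat(feats, dim=1))

class CascadedDilatedConv(nn.Module):
    """Dilated 3x3 convolutions with increasing rates; each branch feeds the
    next (cascade) and all branch outputs are summed. The fusion scheme here
    is an assumption, not necessarily the paper's exact aggregation."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )

    def forward(self, x):
        out, acc = 0, x
        for branch in self.branches:
            acc = branch(acc)
            out = out + acc
        return out

if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    y = CascadedDilatedConv(32)(MultiKernelStripPooling(32)(x))
    print(y.shape)  # torch.Size([1, 32, 64, 64])
```

In this sketch the strip-pooled features summarize blur along rows and columns at several granularities, while the dilated branches enlarge the receptive field without downsampling, which is consistent with the single-pass, non-recurrent design described in the abstract.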