In this paper, we address the problem of dynamic scene deblurring in the presence of motion blur. Restoring images affected by severe blur necessitates a network design with a large receptive field, which existing networks attempt to achieve by simply increasing the number of generic convolution layers, the kernel size, or the number of scales at which the image is processed. However, increasing network capacity in this manner inflates model size, slows inference, and ignores the non-uniform nature of blur. We present a new architecture composed of spatially adaptive residual learning modules that implicitly discover the spatially varying shifts responsible for non-uniform blur in the input image and learn to modulate the filters accordingly. This capability is complemented by a self-attentive module that captures non-local relationships among the intermediate features and enlarges the receptive field. We then incorporate a spatiotemporal recurrent module into the design to also facilitate efficient video deblurring. Our networks can implicitly model the spatially varying deblurring process, while dispensing with multi-scale processing and large filters entirely. Extensive qualitative and quantitative comparisons with prior art on benchmark dynamic scene deblurring datasets clearly demonstrate the superiority of the proposed networks: they reduce model size while delivering significant improvements in accuracy and speed, enabling almost real-time deblurring.
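To make the non-local self-attention idea concrete, the following is a minimal NumPy sketch of attention over a feature map: every spatial position is re-estimated as a softmax-weighted sum over all positions, so the effective receptive field spans the whole image in a single layer. This is a simplified, single-head, unprojected illustration of the general mechanism; the function name and the absence of learned query/key/value projections are our assumptions, not the paper's exact module.

```python
import numpy as np

def nonlocal_self_attention(feat):
    """Simplified non-local self-attention over a (C, H, W) feature map.

    Each spatial position attends to every other position, so one layer
    has a global receptive field, without large kernels or multi-scale
    processing. Returns the input plus the attended features (residual).
    """
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)                    # flatten space: (C, N)
    scores = x.T @ x / np.sqrt(C)                 # (N, N) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)   # stabilize the softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax weights
    out = (x @ attn.T).reshape(C, H, W)           # weighted sum of all positions
    return feat + out                             # residual connection

rng = np.random.default_rng(0)
features = rng.standard_normal((4, 8, 8))
refined = nonlocal_self_attention(features)
print(refined.shape)  # (4, 8, 8) — same shape, globally aggregated features
```

In a trained network, the affinity computation would use learned projections of the features rather than raw dot products, but the receptive-field argument is the same: the attention matrix connects every position to every other in one step.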