Video deblurring is a highly ill-posed problem because the blur varies both spatially and temporally. An intuitive approach to video deblurring involves two steps: (a) detecting the blurry regions in the current frame; (b) exploiting information from the clear regions of adjacent frames to deblur the current frame. To realize this process, our idea is to detect the pixel-wise blur level of each frame and incorporate it into video deblurring. To this end, we propose a novel framework that exploits a motion magnitude prior (MMP) as guidance for efficient deep video deblurring. Specifically, since the movement of a pixel along its trajectory during the exposure time is positively correlated with the level of motion blur, we first use the average magnitude of optical flow computed from sharp frames captured at a high frame rate to generate synthetic blurry frames and their corresponding pixel-wise motion magnitude maps. We then build a dataset of blurry-frame and MMP pairs, from which a compact CNN learns the MMP by regression. The MMP encodes both spatial and temporal blur-level information and can be further integrated into an efficient recurrent neural network (RNN) for video deblurring. Extensive experiments on public datasets validate the effectiveness of the proposed method.
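The data-synthesis step described above can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the function name `synthesize_blur_and_mmp` and the assumption that per-frame optical-flow fields are already available (e.g., from an off-the-shelf flow estimator) are ours; only the temporal averaging of sharp frames and the averaged flow magnitude as the blur-level target come from the text.

```python
import numpy as np

def synthesize_blur_and_mmp(sharp_frames, flows):
    """Hypothetical sketch of the MMP data synthesis.

    sharp_frames: list of HxWx3 float arrays sampled at a high frame rate
                  within one exposure interval.
    flows:        list of HxWx2 optical-flow fields, flows[i] estimated
                  between sharp_frames[i] and sharp_frames[i+1].
    """
    # Synthetic blurry frame: temporal average of the sharp frames,
    # mimicking light accumulation over the exposure time.
    blurry = np.mean(np.stack(sharp_frames), axis=0)

    # Per-pixel motion magnitude: average flow magnitude along the
    # trajectory, which is positively correlated with local blur level.
    mags = [np.linalg.norm(f, axis=-1) for f in flows]
    mmp = np.mean(np.stack(mags), axis=0)

    # Normalize to [0, 1] so the compact CNN regresses a bounded target.
    mmp = mmp / (mmp.max() + 1e-8)
    return blurry, mmp
```

Pairs `(blurry, mmp)` produced this way form the training set for the MMP regression network; at test time the learned MMP then guides the deblurring RNN.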