Abrupt motion of the camera or of objects in a scene results in blurry video; recovering a high-quality video therefore requires two types of enhancement: visual enhancement and temporal upsampling. A broad range of research has attempted to recover clean frames from blurred image sequences or to temporally upsample frames by interpolation, yet very few studies handle both problems jointly. In this work, we present a novel framework for deblurring, interpolating, and extrapolating sharp frames from a motion-blurred video in an end-to-end manner. We design our framework to first learn, via optical flow estimation, the pixel-level motion that caused the blur in the given inputs, and then to predict multiple clean frames by warping the decoded features with the estimated flows. To ensure temporal coherence across predicted frames and to resolve potential temporal ambiguity, we propose a simple yet effective flow-based rule. The effectiveness and favorability of our approach are demonstrated through extensive qualitative and quantitative evaluations on motion-blurred datasets generated from high-speed videos.
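The abstract describes predicting clean frames by warping decoded features with estimated optical flows. As a minimal, illustrative sketch of the backward-warping operation this refers to (not the authors' implementation, which operates on learned feature maps inside the network), the following NumPy function bilinearly samples a single-channel image at positions displaced by a dense flow field:

```python
import numpy as np

def backward_warp(img, flow):
    """Backward-warp img with a dense flow field using bilinear sampling.
    img:  (H, W) float array.
    flow: (H, W, 2) array of per-pixel (dx, dy) displacements;
          output[y, x] samples img at (x + dx, y + dy).
    Coordinates are clipped to the image border (border padding)."""
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    # Sampling positions, clipped to valid image coordinates.
    x = np.clip(xs + flow[..., 0], 0, W - 1)
    y = np.clip(ys + flow[..., 1], 0, H - 1)
    # Integer corners surrounding each sampling position.
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    # Bilinear interpolation weights.
    wx, wy = x - x0, y - y0
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

With a zero flow field this reduces to the identity, and a constant flow of (1, 0) shifts content one pixel to the left; in the paper's setting the flow would instead come from the learned estimator and the warp would be applied per feature channel.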