We present a novel approach to video deblurring that fits a deep network to the test video itself. One key observation is that some frames in a video with motion blur are much sharper than others, so the texture information in those sharp frames can be transferred to the blurry frames. Our approach heuristically selects sharp frames from a video and then trains a convolutional neural network on these sharp frames. The trained network often absorbs enough scene detail to deblur all frames of the video. As an internal learning method, our approach has no domain gap between training and test data, which is a problematic issue for existing video deblurring approaches. Experiments on real-world video data show that our model reconstructs clearer and sharper videos than state-of-the-art video deblurring approaches. Code and data are available at https://github.com/xrenaa/Deblur-by-Fitting.
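The sharp-frame selection step can be sketched as follows. This is a minimal illustration, assuming a variance-of-Laplacian sharpness score and a top-fraction cutoff; the abstract does not specify the actual heuristic, so the metric, the `top_fraction` parameter, and the function names are illustrative assumptions rather than the authors' method.

```python
# Hypothetical sketch of heuristic sharp-frame selection (assumed metric:
# variance of the Laplacian; not necessarily the criterion used in the paper).
import cv2
import numpy as np


def sharpness_score(frame: np.ndarray) -> float:
    """Return a scalar sharpness score for a BGR frame: higher means sharper."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()


def select_sharp_frames(video_path: str, top_fraction: float = 0.2) -> list:
    """Read a video and keep the sharpest `top_fraction` of its frames."""
    cap = cv2.VideoCapture(video_path)
    frames, scores = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
        scores.append(sharpness_score(frame))
    cap.release()

    # Keep the frames whose sharpness scores fall in the top fraction;
    # these would serve as the training set for the per-video network.
    k = max(1, int(len(frames) * top_fraction))
    keep = np.argsort(scores)[-k:]
    return [frames[i] for i in sorted(keep)]
```

The selected frames would then be used to fit the network to the test video; the remaining (blurrier) frames are only seen at inference time.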