The key to video inpainting is to exploit correlation information from as many reference frames as possible. Existing flow-based propagation methods split the video synthesis process into multiple steps: flow completion → pixel propagation → synthesis. However, these methods have a significant drawback: errors introduced at each step accumulate and are amplified in the next. To this end, we propose an Error Compensation Framework for Flow-guided Video Inpainting (ECFVI), which retains the advantages of flow-based methods while offsetting their weaknesses. We address these weaknesses with a newly designed flow completion module and an error compensation network that exploits an error guidance map. Our approach greatly improves the temporal consistency and visual quality of the completed videos. Experimental results show the superior performance of our proposed method, with a ×6 speedup over state-of-the-art methods. In addition, we present a new benchmark dataset for evaluation that addresses the weaknesses of existing test datasets.
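To make the three-step pipeline and the role of an error guidance map concrete, here is a minimal NumPy sketch. It is not the authors' implementation: the functions `complete_flow`, `propagate_pixels`, and `compensate`, and the thresholds and fill rules inside them, are all hypothetical toy stand-ins that only illustrate how errors from flow completion flow into propagation, and how a guidance map can gate a correction step.

```python
import numpy as np

def complete_flow(flow, mask):
    """Toy flow completion: fill masked flow vectors with the mean known flow.
    (Hypothetical stand-in for the paper's flow completion module.)"""
    filled = flow.copy()
    filled[mask] = flow[~mask].mean(axis=0)
    return filled

def propagate_pixels(frame, flow, mask):
    """Toy pixel propagation: copy pixels into the hole along (rounded) flow."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys + flow[..., 1]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs + flow[..., 0]).round().astype(int), 0, w - 1)
    out = frame.copy()
    out[mask] = frame[src_y[mask], src_x[mask]]
    return out

def compensate(filled, fallback, error_map, tau=0.5):
    """Toy error compensation: where the guidance error exceeds tau,
    replace propagated pixels with synthesized fallback content."""
    use_fallback = (error_map > tau)[..., None]
    return np.where(use_fallback, fallback, filled)

# Tiny demo on random data (illustration only, not a real video).
h, w = 8, 8
rng = np.random.default_rng(0)
frame = rng.random((h, w, 3))
flow = np.ones((h, w, 2))          # uniform unit motion, purely illustrative
mask = np.zeros((h, w), dtype=bool)
mask[3:5, 3:5] = True              # a small square "hole" to inpaint

completed_flow = complete_flow(flow, mask)          # step 1: flow completion
filled = propagate_pixels(frame, completed_flow, mask)  # step 2: propagation
error_map = np.abs(filled - frame).mean(axis=-1)    # toy error-guidance map
result = compensate(filled, frame, error_map)       # step 3: guided correction
```

The point of the sketch is structural: any bias in `completed_flow` shifts `src_y`/`src_x`, so propagation copies the wrong pixels, and without the `compensate` stage that error would simply be handed to the synthesis step unchanged.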