Video frame interpolation is a challenging task because real-world scenes change constantly. Previous methods typically compute bi-directional optical flows and then predict the intermediate optical flows under a linear motion assumption, which leads to isotropic intermediate flow generation. Follow-up work obtained anisotropic adjustment by estimating higher-order motion information from extra frames. Because they rely on motion assumptions, these methods struggle to model the complicated motion of real scenes. In this paper, we propose A^2OF, an end-to-end training method for video frame interpolation with event-driven Anisotropic Adjustment of Optical Flows. Specifically, we use events to generate optical flow distribution masks for the intermediate optical flow, which can model the complicated motion between two frames. Our proposed method outperforms previous methods on video frame interpolation, taking supervised event-based video interpolation to a higher stage.
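To illustrate the difference between the two strategies, the following is a minimal, hypothetical sketch (not the paper's actual model): under the linear motion assumption the intermediate flow at time t is a uniform scaling of the frame-to-frame flow, whereas an event-derived per-pixel mask can reweight the flow anisotropically, reflecting that different regions accumulate their motion at different times. The function names, mask values, and the simple multiplicative weighting are illustrative assumptions.

```python
import numpy as np

def linear_intermediate_flow(f01, t):
    # Linear motion assumption: every pixel has covered the same
    # fraction t of its inter-frame motion (isotropic in time).
    return t * f01

def event_weighted_intermediate_flow(f01, event_mask):
    # Hypothetical sketch: event_mask in [0, 1], shape (H, W, 1), gives
    # the per-pixel fraction of inter-frame motion accumulated by time t,
    # e.g. estimated from event counts. This allows anisotropic
    # intermediate flows without a linear (or higher-order) motion model.
    return event_mask * f01

H, W = 4, 4
f01 = np.ones((H, W, 2))  # toy bi-directional flow: uniform unit motion
t = 0.5
linear = linear_intermediate_flow(f01, t)

# Suppose events indicate the left half moved early (80% of its motion
# done by t=0.5) while the right half moved late (only 20% done).
mask = np.concatenate([np.full((H, W // 2, 1), 0.8),
                       np.full((H, W // 2, 1), 0.2)], axis=1)
aniso = event_weighted_intermediate_flow(f01, mask)
```

The linear estimate assigns the same fractional flow everywhere, while the event-weighted estimate differs per region, which is the behavior the abstract's distribution masks are designed to capture.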