We introduce VideoFlow, a novel optical flow estimation framework for videos. In contrast to previous methods that learn to estimate optical flow from two frames, VideoFlow concurrently estimates bi-directional optical flows for the multiple frames available in videos by fully exploiting temporal cues. We first propose a TRi-frame Optical Flow (TROF) module that estimates bi-directional optical flows for the center frame of a three-frame window. The information of the frame triplet is iteratively fused onto the center frame. To extend TROF to longer sequences, we further propose a MOtion Propagation (MOP) module that bridges multiple TROFs and propagates motion features between adjacent TROFs. Through the iterative flow refinement, the information fused within individual TROFs is propagated across the whole sequence via MOP. By effectively exploiting video information, VideoFlow achieves state-of-the-art performance, ranking 1st on all public benchmarks. On the Sintel benchmark, VideoFlow achieves an average end-point error (AEPE) of 1.649 on the final pass and 0.991 on the clean pass, a 15.1% and 7.6% error reduction from the best published results (1.943 and 1.073, from FlowFormer++). On the KITTI-2015 benchmark, VideoFlow achieves an F1-all error of 3.65%, a 19.2% error reduction from the best published result (4.52%, from FlowFormer++).
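To make the TROF/MOP interplay concrete, below is a minimal structural sketch in PyTorch based only on the description above. Everything beyond the module names TROF and MOP is an assumption: the `fuse`, `to_flow`, and `mix` layers, the tensor shapes, the residual flow updates, and the `estimate_flows` driver are illustrative stand-ins for the paper's actual correlation-based recurrent refinement, not the authors' implementation.

```python
"""Structural sketch of VideoFlow's TROF + MOP design, derived only from the
abstract. All layer choices, shapes, and update rules are illustrative."""
import torch
import torch.nn as nn


class TROF(nn.Module):
    """TRi-frame Optical Flow: refines bi-directional flows for the center
    frame of a (prev, center, next) triplet, fusing triplet information
    onto the center frame at every iteration."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Hypothetical fusion/update layers standing in for the paper's
        # correlation lookup and recurrent refinement.
        self.fuse = nn.Conv2d(3 * feat_dim, feat_dim, 3, padding=1)
        self.to_flow = nn.Conv2d(feat_dim, 4, 3, padding=1)  # fwd + bwd flow

    def forward(self, f_prev, f_center, f_next, motion):
        # Fuse triplet features onto the center frame's motion state.
        fused = self.fuse(torch.cat([f_prev, f_center, f_next], dim=1))
        motion = motion + fused
        delta = self.to_flow(motion)  # residual bi-directional flow update
        return motion, delta


class MOP(nn.Module):
    """MOtion Propagation: exchanges motion features between adjacent TROFs
    so per-triplet information spreads over the whole sequence."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.mix = nn.Conv2d(3 * feat_dim, feat_dim, 1)

    def forward(self, motions):
        # Each center frame sees its temporal neighbors' motion states;
        # indices are clamped at the sequence boundaries.
        out = []
        for i in range(len(motions)):
            left = motions[max(i - 1, 0)]
            right = motions[min(i + 1, len(motions) - 1)]
            out.append(self.mix(torch.cat([left, motions[i], right], dim=1)))
        return out


def estimate_flows(frame_feats, iters: int = 4, feat_dim: int = 128):
    """Run TROF on every frame triplet, propagating motion via MOP after
    each refinement iteration. Returns per-center-frame bi-directional
    flows accumulated over the iterations."""
    trof, mop = TROF(feat_dim), MOP(feat_dim)
    T = len(frame_feats)
    b, _, h, w = frame_feats[0].shape
    motions = [torch.zeros(b, feat_dim, h, w) for _ in range(T - 2)]
    flows = [torch.zeros(b, 4, h, w) for _ in range(T - 2)]
    for _ in range(iters):
        for i in range(T - 2):  # one TROF per center frame
            motions[i], delta = trof(frame_feats[i], frame_feats[i + 1],
                                     frame_feats[i + 2], motions[i])
            flows[i] = flows[i] + delta
        motions = mop(motions)  # bridge adjacent TROFs
    return flows


if __name__ == "__main__":
    feats = [torch.randn(1, 128, 32, 64) for _ in range(5)]  # 5-frame clip
    flows = estimate_flows(feats)
    print(len(flows), flows[0].shape)  # 3 center frames, 4-channel flows
```

The key structural point the sketch captures is the alternation: every refinement iteration first updates each triplet's motion state locally (TROF), then mixes states across neighboring triplets (MOP), so after several iterations information from any frame can reach every center frame in the clip.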