Recently, neural networks for scene flow estimation have shown impressive results on automotive data such as the KITTI benchmark. However, despite using sophisticated rigidity assumptions and parametrizations, such networks are typically limited to only two frame pairs, which does not allow them to exploit temporal information. In our paper we address this shortcoming by proposing a novel multi-frame approach that considers an additional preceding stereo pair. To this end, we proceed in two steps: Firstly, building upon the recent RAFT-3D approach, we develop an advanced two-frame baseline by incorporating an improved stereo method. Secondly, and even more importantly, exploiting the specific modeling concepts of RAFT-3D, we propose a U-Net-like architecture that performs a fusion of forward and backward flow estimates and hence allows temporal information to be integrated on demand. Experiments on the KITTI benchmark not only show that the advantages of the improved baseline and the temporal fusion approach complement each other, they also demonstrate that the computed scene flow is highly accurate. More precisely, our approach ranks second overall and first for the even more challenging foreground objects, in total outperforming the original RAFT-3D method by more than 16%. Code is available at https://github.com/cv-stuttgart/M-FUSE.