3D scene flow estimation is a vital tool for perceiving our environment with depth or range sensors. Unlike optical flow, the data are usually sparse and, in most cases, partially occluded between the two temporal samplings. Here we propose a new scene flow architecture, called OGSF-Net, which tightly couples the learning of flow and of occlusions between frames. Their coupled symbiosis yields more accurate flow predictions in space. Unlike a traditional multi-action network, our unified approach is fused throughout the network, boosting performance for both occlusion detection and flow estimation. Our architecture is the first to gauge occlusion in 3D scene flow estimation on point clouds. On key datasets such as FlyingThings3D and KITTI, we achieve state-of-the-art results.
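The coupling of flow and occlusion described above can be illustrated with a minimal sketch. This is not the paper's actual architecture: the feature dimensions, the single shared linear head, and the masking rule are all illustrative assumptions. It only shows the core idea that a shared representation predicts both a per-point 3D flow and an occlusion probability, and that the occlusion estimate modulates the flow, since an occluded point has no true correspondence in the other frame.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-point features for N points of a point cloud
# (dimensions are illustrative, not from the paper): assume some
# backbone has produced a D-dim feature per point.
N, D = 8, 16
feats = rng.normal(size=(N, D))

# One shared head predicts flow (3 dims) and an occlusion logit
# (1 dim) together, so both tasks share the same representation.
W = rng.normal(size=(D, 4)) * 0.1
out = feats @ W
flow = out[:, :3]                        # per-point 3D flow vectors
occ = 1.0 / (1.0 + np.exp(-out[:, 3]))   # occlusion probability in (0, 1)

# Coupling: down-weight the flow where the point is likely occluded
# in the other frame, since no real correspondence exists there.
masked_flow = flow * (1.0 - occ)[:, None]

print(masked_flow.shape)  # (8, 3)
```

In a full network this coupling would be applied at every level of the feature hierarchy rather than once at the output, which is what "fused throughout the network" refers to.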