We present FlowNet3D++, a deep scene flow estimation network. Inspired by classical methods, FlowNet3D++ incorporates geometric constraints into FlowNet3D in the form of point-to-plane distance and angular alignment between individual vectors in the flow field. We demonstrate that the addition of these geometric loss terms improves the previous state-of-the-art FlowNet3D accuracy from 57.85% to 63.43%. To further demonstrate the effectiveness of our geometric constraints, we propose a benchmark for flow estimation on the task of dynamic 3D reconstruction, thus providing a more holistic and practical measure of performance than the breakdown of individual metrics previously used to evaluate scene flow. This is made possible through the contribution of a novel pipeline to integrate point-based scene flow predictions into a global dense volume. FlowNet3D++ achieves up to a 15.0% reduction in reconstruction error over FlowNet3D, and up to a 35.2% improvement over KillingFusion alone. Our scene flow estimation code will be released.
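The two geometric loss terms named above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the assumption that nearest-neighbor target points and their normals are precomputed, and the mean reduction are all illustrative choices.

```python
import numpy as np

def point_to_plane_loss(src_points, flow_pred, target_points, target_normals):
    """Squared distance from each flow-warped source point to the tangent
    plane of its (assumed precomputed) nearest target point.
    All arrays have shape (N, 3); normals are assumed unit-length."""
    warped = src_points + flow_pred
    # Signed distance to the plane: (warped - target) . normal
    residual = np.einsum('ij,ij->i', warped - target_points, target_normals)
    return np.mean(residual ** 2)

def angular_alignment_loss(flow_pred, flow_gt, eps=1e-8):
    """Penalize angular deviation between predicted and reference flow
    vectors via one minus cosine similarity; eps guards zero vectors."""
    cos = np.einsum('ij,ij->i', flow_pred, flow_gt) / (
        np.linalg.norm(flow_pred, axis=1)
        * np.linalg.norm(flow_gt, axis=1) + eps)
    return np.mean(1.0 - cos)
```

In training, terms like these would be weighted and added to the network's base flow-regression loss; a flow that lands points exactly on the target surface drives the point-to-plane term to zero, and parallel flow vectors drive the angular term to zero.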