Capturing general deforming scenes is crucial for many computer graphics and vision applications, and it is especially challenging when only a monocular RGB video of the scene is available. Competing methods assume dense point tracks, 3D templates, or large-scale training datasets, or capture only small-scale deformations. In contrast, our method, Ub4D, makes none of these assumptions while outperforming the previous state of the art in challenging scenarios. Our technique includes two components that are new in the context of non-rigid 3D reconstruction: 1) a coordinate-based, implicit neural representation of non-rigid scenes, which enables an unbiased reconstruction of dynamic scenes, and 2) a novel dynamic scene flow loss, which enables the reconstruction of larger deformations. Results on our new dataset, which will be made publicly available, demonstrate a clear improvement over the state of the art in terms of surface reconstruction accuracy and robustness to large deformations. Visit the project page: https://4dqv.mpi-inf.mpg.de/Ub4D/.
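The first component, a coordinate-based implicit neural representation, can be illustrated with a minimal sketch: a small MLP maps a 3D point and a time index to a signed distance value. All layer sizes, the encoding frequencies, and the random weights below are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical sketch (NOT the paper's implementation): a coordinate-based
# implicit representation f(x, t) -> signed distance, realized as a tiny MLP
# over positionally encoded inputs. Weights are random; sizes are assumptions.
import numpy as np

def positional_encoding(p, num_freqs=4):
    """Map each coordinate to [p, sin(2^k p), cos(2^k p)] features."""
    feats = [p]
    for k in range(num_freqs):
        feats.append(np.sin((2.0 ** k) * p))
        feats.append(np.cos((2.0 ** k) * p))
    return np.concatenate(feats, axis=-1)

class ImplicitSDF:
    """Tiny two-layer MLP mapping an encoded (x, y, z, t) to a scalar SDF."""
    def __init__(self, in_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, xyz, t):
        # Concatenate spatial coordinates with the time index, then encode.
        p = np.concatenate([xyz, np.atleast_1d(t)], axis=-1)
        h = np.maximum(positional_encoding(p) @ self.W1 + self.b1, 0.0)  # ReLU
        return (h @ self.W2 + self.b2)[0]  # scalar signed distance

# (x, y, z, t) = 4 input dims, each expanded by 4 sin/cos frequency bands.
in_dim = 4 * (1 + 2 * 4)
sdf = ImplicitSDF(in_dim)
print(float(sdf(np.array([0.1, -0.2, 0.3]), 0.5)))
```

Because the surface is the zero level set of a continuous function, it can be queried at any point and any time, which is what makes such a representation attractive for dynamic scenes.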
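The second component, a scene flow loss, can likewise be sketched in a simplified form: a flow field predicts per-point 3D motion between frames, and the loss penalizes points that, after advection, disagree with their positions in the next frame. This is an illustrative formulation, not the paper's exact loss.

```python
# Hedged sketch of a scene flow loss (illustrative assumption, not the
# authors' exact formulation): penalize the residual between points advected
# by the predicted flow and their corresponding next-frame positions.
import numpy as np

def scene_flow_loss(points_t, points_t1, flow):
    """Mean squared residual between advected and next-frame points.

    points_t, points_t1: (N, 3) corresponding points at times t and t+1.
    flow: (N, 3) predicted per-point 3D displacement.
    """
    advected = points_t + flow
    return np.mean(np.sum((advected - points_t1) ** 2, axis=-1))

pts_t = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
pts_t1 = np.array([[0.1, 0.0, 0.0], [1.0, 0.2, 0.0]])
perfect_flow = pts_t1 - pts_t
print(scene_flow_loss(pts_t, pts_t1, perfect_flow))  # → 0.0
```

Supervising motion directly, rather than only per-frame geometry, is what allows larger deformations to be reconstructed consistently over time.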