Meshes are commonly used as 3D maps since they encode the topology of the scene while remaining lightweight. Unfortunately, 3D meshes are mathematically difficult to handle directly because of their combinatorial and discrete nature. Therefore, most approaches generate the 3D mesh of a scene only after fusing depth data using volumetric or other intermediate representations. Nevertheless, volumetric fusion remains computationally expensive in both time and memory. In this paper, we leapfrog these intermediate representations and build a 3D mesh directly from a depth map and the sparse landmarks triangulated by visual odometry. To this end, we formulate a non-smooth convex optimization problem that we solve using a primal-dual method. Our approach generates a smooth and accurate 3D mesh that substantially improves over the state of the art in direct mesh reconstruction, while running in real time.
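To make the primal-dual machinery concrete, the following is a minimal, self-contained sketch of a generic Chambolle-Pock iteration applied to a TV-regularized depth-smoothing problem, min_x 0.5*||x - d||^2 + lam*||grad(x)||_1. This is an illustrative assumption for exposition only: the actual paper optimizes over mesh vertices with its own data and regularization terms, and the function names (`primal_dual_tv`, `grad`, `div`) are hypothetical.

```python
# Hedged sketch: generic primal-dual (Chambolle-Pock) solver for a
# TV-regularized depth-smoothing problem. NOT the paper's exact mesh
# formulation; it only illustrates the class of non-smooth convex
# problems the abstract refers to.
import numpy as np

def grad(x):
    """Forward-difference gradient of a 2D depth map, shape (2, H, W)."""
    gx = np.zeros_like(x)
    gy = np.zeros_like(x)
    gx[:, :-1] = x[:, 1:] - x[:, :-1]      # horizontal differences
    gy[:-1, :] = x[1:, :] - x[:-1, :]      # vertical differences
    return np.stack([gx, gy])

def div(p):
    """Discrete divergence, the negative adjoint of grad."""
    px, py = p
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]
    dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]
    dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

def primal_dual_tv(d, lam=0.1, iters=200):
    """Chambolle-Pock iterations; step sizes satisfy sigma*tau*||K||^2 <= 1
    with ||K||^2 <= 8 for the 2D forward-difference gradient."""
    tau = sigma = 1.0 / np.sqrt(8.0)
    x = d.copy()
    x_bar = x.copy()
    y = np.zeros((2,) + d.shape)
    for _ in range(iters):
        # Dual ascent step: project onto the l-infinity ball of radius lam
        # (prox of the conjugate of lam*||.||_1).
        y = np.clip(y + sigma * grad(x_bar), -lam, lam)
        # Primal descent step: prox of the quadratic data term 0.5*||x - d||^2.
        x_prev = x
        x = (x + tau * (div(y) + d)) / (1.0 + tau)
        # Over-relaxation (theta = 1).
        x_bar = 2.0 * x - x_prev
    return x

if __name__ == "__main__":
    depth = np.random.rand(64, 64)          # stand-in for a noisy depth map
    smoothed = primal_dual_tv(depth, lam=0.05)
    print(smoothed.shape)
```

The appeal of this family of solvers, and presumably why the paper adopts one, is that each iteration consists only of cheap, parallelizable local updates and a simple projection, which is compatible with real-time operation.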