Previous online 3D dense reconstruction methods struggle to balance memory storage and surface quality, largely due to their use of static underlying geometry representations, such as TSDFs (truncated signed distance functions) or surfels, which carry no knowledge of scene priors. In this paper, we present DI-Fusion (Deep Implicit Fusion), based on a novel 3D representation, i.e., Probabilistic Local Implicit Voxels (PLIVoxs), for online 3D reconstruction with a commodity RGB-D camera. Each PLIVox encodes scene priors capturing both the local geometry and its uncertainty, parameterized by a deep neural network. With such deep priors, we are able to perform online implicit 3D reconstruction, achieving state-of-the-art camera trajectory estimation accuracy and mapping quality, while attaining better storage efficiency than previous online 3D reconstruction approaches. Our implementation is available at https://www.github.com/huangjh-pub/di-fusion.
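To make the representation concrete, the sketch below shows one way a PLIVox-style structure could look: a sparse grid of voxels, each holding a latent code, with a shared tiny decoder network mapping a point's normalized local coordinate plus the latent code to a signed-distance mean and a variance (uncertainty). All names, layer sizes, and the NumPy-only decoder are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

class PLIVoxGrid:
    """Illustrative sketch of a probabilistic local implicit voxel grid.

    Each allocated voxel stores a latent code; a shared 2-layer MLP
    decodes (local coordinate, latent code) into a signed-distance
    mean and a variance. Hypothetical design, not the paper's model.
    """

    def __init__(self, voxel_size=0.1, latent_dim=8, seed=0):
        self.voxel_size = voxel_size
        self.latent_dim = latent_dim
        self.voxels = {}  # integer grid index -> latent code vector
        rng = np.random.default_rng(seed)
        # Tiny decoder: (3 + latent_dim) -> 16 -> 2 (mean, log-variance)
        self.w1 = rng.standard_normal((3 + latent_dim, 16)) * 0.1
        self.b1 = np.zeros(16)
        self.w2 = rng.standard_normal((16, 2)) * 0.1
        self.b2 = np.zeros(2)

    def _key(self, p):
        # Integer index of the voxel containing point p.
        return tuple(np.floor(np.asarray(p) / self.voxel_size).astype(int))

    def observe(self, p):
        """Allocate the voxel covering p with a zero latent code.

        A real system would instead optimize/update the latent code
        from depth observations; allocation alone is shown here.
        """
        self.voxels.setdefault(self._key(p), np.zeros(self.latent_dim))

    def query(self, p):
        """Return (sdf_mean, sdf_variance) at p, or None if unobserved."""
        key = self._key(p)
        if key not in self.voxels:
            return None
        center = (np.asarray(key) + 0.5) * self.voxel_size
        local = (np.asarray(p) - center) / self.voxel_size  # in [-0.5, 0.5]
        x = np.concatenate([local, self.voxels[key]])
        h = np.maximum(x @ self.w1 + self.b1, 0.0)  # ReLU hidden layer
        mean, log_var = h @ self.w2 + self.b2
        return mean, np.exp(log_var)  # exp keeps the variance positive
```

The sparse dictionary keeps storage proportional to observed surface area rather than the full volume, which is the storage-efficiency argument the abstract makes against dense TSDF grids; the predicted variance is what allows fusing new observations probabilistically.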