Actively planning sensor views during object reconstruction is essential for autonomous mobile robots. This task is usually performed by evaluating information gain from an explicit uncertainty map: existing algorithms compare a set of preset candidate views and select the next-best-view among them. In contrast, we adopt the emerging implicit representation as the object model and combine it seamlessly with the active reconstruction task. To fully integrate observation information into the model, we propose a supervision method tailored to object-level reconstruction that considers both valid and free space. Additionally, to evaluate view information directly from the implicit object model, we introduce a sample-based uncertainty evaluation method. It samples points along rays directly through the object model and uses the variation of the implicit function's inferences as the uncertainty metric, requiring neither voxel traversal nor an additional information map. Because our metric is differentiable, the next-best-view can be optimized continuously by maximizing the uncertainty, dispensing with the traditional preset candidate views, which may yield sub-optimal results. Experiments in simulated and real-world scenes show that our method effectively improves both the reconstruction accuracy and the view-planning efficiency of active reconstruction tasks. The proposed system will be open-sourced at https://github.com/HITSZ-NRSL/ActiveImplicitRecon.git.
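To make the core idea concrete, below is a minimal PyTorch sketch of sample-based uncertainty evaluation with gradient-driven next-best-view optimization: sample points along rays, query an implicit model, score an entropy-style uncertainty, and ascend its gradient with respect to the camera position. The occupancy network, ray setup, and binary-entropy uncertainty form here are illustrative assumptions under the abstract's description, not the authors' exact implementation.

```python
# Minimal sketch: continuous next-best-view optimization by maximizing a
# differentiable, sample-based uncertainty queried directly from an implicit
# object model (no voxel traversal, no separate information map).
# OccupancyMLP, the ray construction, and the entropy metric are assumptions.

import torch
import torch.nn as nn


class OccupancyMLP(nn.Module):
    """Hypothetical implicit object model: point (x, y, z) -> occupancy in (0, 1)."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        return self.net(pts).squeeze(-1)


def view_uncertainty(model, cam_pos, dirs, n_samples=32, near=0.2, far=2.0):
    """Sample points along rays from cam_pos and score uncertainty by
    querying the implicit model at each sample."""
    t = torch.linspace(near, far, n_samples)                 # depths (S,)
    pts = cam_pos + dirs[:, None, :] * t[None, :, None]      # samples (R, S, 3)
    occ = model(pts)                                         # inferences (R, S)
    # Binary-entropy-style metric: inferences near 0.5 are most uncertain.
    eps = 1e-6
    ent = -(occ * (occ + eps).log() + (1 - occ) * (1 - occ + eps).log())
    return ent.mean()


# Gradient ascent on the differentiable uncertainty w.r.t. the camera position,
# replacing a discrete candidate-view search with continuous optimization.
model = OccupancyMLP()
cam_pos = torch.tensor([1.0, 0.0, 0.5], requires_grad=True)
dirs = torch.randn(64, 3)
dirs = dirs / dirs.norm(dim=-1, keepdim=True)                # unit ray directions
opt = torch.optim.Adam([cam_pos], lr=1e-2)
for _ in range(50):
    opt.zero_grad()
    loss = -view_uncertainty(model, cam_pos, dirs)           # maximize uncertainty
    loss.backward()
    opt.step()
```

In a full system the ray directions would be re-derived from the current camera pose each step and the pose would include orientation; this sketch fixes the directions only to keep the continuous-optimization mechanism visible.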