Autonomous robotic tasks require actively perceiving the environment to achieve application-specific goals. In this paper, we address the problem of positioning an RGB camera to collect the most informative images to represent an unknown scene, given a limited measurement budget. We propose a novel mapless planning framework to iteratively plan the next best camera view based on collected image measurements. A key aspect of our approach is a new technique for uncertainty estimation in image-based neural rendering, which guides measurement acquisition at the most uncertain view among view candidates, thus maximising the information value during data collection. By incrementally adding new measurements into our image collection, our approach efficiently explores an unknown scene in a mapless manner. We show that our uncertainty estimation is generalisable and valuable for view planning in unknown scenes. Our planning experiments using synthetic and real-world data verify that our uncertainty-guided approach finds informative images leading to more accurate scene representations when compared against baselines.
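The planning loop described above can be sketched as a greedy selection procedure: at each step, score every candidate view with an uncertainty estimator conditioned on the images collected so far, capture the most uncertain view, and repeat until the measurement budget is exhausted. The sketch below is illustrative only; `estimate_uncertainty` and `capture` are hypothetical stand-ins for the paper's neural-rendering uncertainty estimation and the robot's image acquisition, not its actual implementation.

```python
def plan_views(candidates, budget, estimate_uncertainty, capture):
    """Greedy uncertainty-guided view planning (illustrative sketch).

    At each step, score every remaining candidate view with the
    uncertainty estimator conditioned on the images collected so far,
    capture the most uncertain view, and add the new measurement to
    the collection. Stops when the measurement budget is spent.
    """
    images = []                     # collected image measurements
    remaining = list(candidates)
    for _ in range(budget):
        if not remaining:
            break
        # next best view = most uncertain candidate given current images
        best = max(remaining, key=lambda v: estimate_uncertainty(v, images))
        remaining.remove(best)
        images.append(capture(best))
    return images


# Toy stand-ins: views are angles in degrees; "uncertainty" is the
# distance to the nearest already-captured view, so the planner spreads
# measurements across the scene. A real system would instead query the
# learned image-based rendering model.
def toy_uncertainty(view, images):
    if not images:
        return float("inf")
    return min(abs(view - angle) for angle, _ in images)

def toy_capture(view):
    return (view, f"image@{view}")

views = [0, 90, 180, 270, 45]
collected = plan_views(views, budget=3,
                       estimate_uncertainty=toy_uncertainty,
                       capture=toy_capture)
```

With the toy uncertainty above, the planner first captures an arbitrary view (all candidates are equally unknown), then repeatedly picks the candidate farthest from everything already captured, which mirrors the information-maximising behaviour the abstract describes.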