Constructing and maintaining a consistent scene model on the fly is the core task for online spatial perception, interpretation, and action. In this paper, we represent the scene with a Bayesian nonparametric mixture model, which seamlessly describes per-point occupancy status with a continuous probability density function. Instead of following the conventional data-fusion paradigm, we address the problem of learning online the process by which sequential point cloud data are generated from the scene geometry. Incremental and parallel inference updates the parameter space in real time. We show experimentally that the proposed representation achieves state-of-the-art accuracy with promising efficiency. The consistent probabilistic formulation yields a generative model that adapts to different sensor characteristics, and the model complexity can be adjusted dynamically on the fly according to the scale of the data.
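To make the idea of occupancy as a continuous density concrete, the following is a minimal, illustrative sketch: a Gaussian mixture over 3-D points that grows as point cloud batches arrive and can be queried for a smooth occupancy density at any location. This is not the paper's algorithm; the fixed novelty threshold that spawns new components merely stands in for the Bayesian nonparametric machinery that adjusts model complexity, and all names and parameters here (IncrementalGaussianMixtureMap, novelty_thresh, init_var) are assumptions for illustration.

```python
import numpy as np

class IncrementalGaussianMixtureMap:
    """Toy continuous occupancy map: a growing isotropic Gaussian mixture.
    A heuristic novelty gate stands in for nonparametric component birth."""

    def __init__(self, novelty_thresh=3.0, init_var=0.05):
        self.novelty_thresh = novelty_thresh  # squared-distance gate (in units of variance)
        self.init_var = init_var              # variance assigned to newly spawned components
        self.means = np.empty((0, 3))         # component centers
        self.vars = np.empty((0,))            # per-axis isotropic variances
        self.counts = np.empty((0,))          # accumulated evidence per component

    def update(self, points):
        """Incremental update from one batch of points: each point either
        refines its nearest component or spawns a new one."""
        for p in np.asarray(points, dtype=float):
            if self.means.shape[0] == 0:
                self._spawn(p)
                continue
            d2 = np.sum((self.means - p) ** 2, axis=1)
            k = int(np.argmin(d2))
            if d2[k] > self.novelty_thresh * self.vars[k]:
                self._spawn(p)  # point not explained by any component
            else:
                # running-average update of the matched component;
                # d2[k] / 3 is a crude per-axis variance sample in 3-D
                self.counts[k] += 1.0
                lr = 1.0 / self.counts[k]
                self.means[k] += lr * (p - self.means[k])
                self.vars[k] += lr * (d2[k] / 3.0 - self.vars[k])

    def _spawn(self, p):
        self.means = np.vstack([self.means, p])
        self.vars = np.append(self.vars, self.init_var)
        self.counts = np.append(self.counts, 1.0)

    def occupancy_density(self, q):
        """Continuous occupancy density at query point q: a count-weighted
        sum of isotropic 3-D Gaussian kernels."""
        if self.means.shape[0] == 0:
            return 0.0
        w = self.counts / self.counts.sum()
        d2 = np.sum((self.means - np.asarray(q, dtype=float)) ** 2, axis=1)
        return float(np.sum(w * np.exp(-0.5 * d2 / self.vars)
                            / (2.0 * np.pi * self.vars) ** 1.5))

# Usage: feed sequential "scans", then query the density anywhere.
gmm_map = IncrementalGaussianMixtureMap()
scan = np.random.randn(200, 3) * 0.1 + np.array([1.0, 0.0, 0.0])
gmm_map.update(scan)
print(gmm_map.occupancy_density([1.0, 0.0, 0.0]))  # high near the surface
print(gmm_map.occupancy_density([5.0, 5.0, 5.0]))  # near zero in free space
```

Unlike a fixed-resolution occupancy grid, such a mixture gives a resolution-free density that can be evaluated at arbitrary query points, which is the property the abstract highlights; the paper's actual model additionally grows or shrinks its component set in a principled Bayesian nonparametric way rather than via the fixed threshold used above.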