Online semantic 3D segmentation performed in conjunction with real-time RGB-D reconstruction poses special challenges, such as how to perform 3D convolution directly over the progressively fused 3D geometric data, and how to smartly fuse information from frame to frame. We propose a novel fusion-aware 3D point convolution which operates directly on the geometric surface being reconstructed and effectively exploits inter-frame correlation for high-quality 3D feature learning. This is enabled by a dedicated dynamic data structure which organizes the online acquired point cloud with global-local trees. Globally, we compile the online reconstructed 3D points into an incrementally growing coordinate interval tree, enabling fast point insertion and neighborhood query. Locally, we maintain the neighborhood information for each point using an octree whose construction benefits from the fast queries of the global tree. Both levels of trees update dynamically and help the 3D convolution effectively exploit temporal coherence for information fusion across RGB-D frames.
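The global structure described above supports two operations on a growing point cloud: fast insertion of newly fused points and fast neighborhood queries. The following is a minimal sketch of that idea, not the paper's actual implementation: the names (`GlobalIndex`, `insert`, `radius_query`) are my own, and a simple sorted coordinate list stands in for the incrementally growing coordinate interval tree (the local per-point octrees are omitted).

```python
# Hypothetical sketch of a dynamic global index over streamed 3D points.
# A sorted list on the x-coordinate prunes candidates for a range query;
# exact distance filtering then yields the spherical neighborhood.
import bisect

class GlobalIndex:
    def __init__(self):
        self.xs = []    # sorted x-coordinates, maintained incrementally
        self.pts = []   # points stored in the same order as self.xs

    def insert(self, p):
        """Insert a newly reconstructed point, keeping x-order sorted."""
        i = bisect.bisect_left(self.xs, p[0])
        self.xs.insert(i, p[0])
        self.pts.insert(i, p)

    def radius_query(self, q, r):
        """Return all stored points within distance r of query point q."""
        lo = bisect.bisect_left(self.xs, q[0] - r)   # prune on x-axis
        hi = bisect.bisect_right(self.xs, q[0] + r)
        return [p for p in self.pts[lo:hi]
                if sum((a - b) ** 2 for a, b in zip(p, q)) <= r * r]

# Simulate two incoming RGB-D "frames" whose points are fused online.
idx = GlobalIndex()
for frame_pts in [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
                  [(0.5, 0.2, 0.0), (5.0, 5.0, 5.0)]]:
    for p in frame_pts:
        idx.insert(p)

nbrs = idx.radius_query((0.0, 0.0, 0.0), 1.0)
```

In the query above, the three points near the origin are returned while the distant point `(5, 5, 5)` is pruned by the coordinate range check before any distance computation, which is the kind of locality the global tree exploits when building each point's octree neighborhood.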