Seamless human-robot interaction is the ultimate goal of developing service robotic systems. To achieve this, robotic agents must understand their surroundings in order to better complete a given task. Semantic scene understanding allows a robotic agent to extract semantic knowledge about the objects in the environment. In this work, we present a semantic scene understanding pipeline that fuses 2D and 3D detection branches to generate a semantic map of the environment. The 2D mask proposals from state-of-the-art 2D detectors are inverse-projected into 3D space and combined with 3D detections from point segmentation networks. Unlike previous works that were evaluated on collected datasets, we test our pipeline in an active photo-realistic robotic environment, BenchBot. Our novelty includes rectification of 3D proposals using projected 2D detections and modality fusion based on object size. This work is done as part of the Robotic Vision Scene Understanding Challenge (RVSU). The performance evaluation demonstrates that our pipeline improves on baseline methods without introducing a significant computational bottleneck.
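The inverse projection mentioned above can be illustrated with a minimal sketch. Assuming a pinhole camera model with known intrinsics (focal lengths `fx`, `fy` and principal point `cx`, `cy` are placeholder parameters, not values from the paper), each pixel covered by a 2D mask is lifted into 3D camera coordinates using its depth value:

```python
import numpy as np

def inverse_project(mask, depth, fx, fy, cx, cy):
    """Back-project the pixels of a 2D mask into 3D camera coordinates.

    mask  : (H, W) boolean array, the 2D mask proposal
    depth : (H, W) per-pixel depth in metres
    Returns an (N, 3) array of 3D points (pinhole camera model).
    """
    v, u = np.nonzero(mask)           # row/column indices of masked pixels
    z = depth[v, u]                   # depth at each masked pixel
    x = (u - cx) * z / fx             # lift pixel columns to camera X
    y = (v - cy) * z / fy             # lift pixel rows to camera Y
    return np.stack([x, y, z], axis=1)
```

The resulting per-object point cloud could then be compared or merged with proposals from a 3D point segmentation branch; the actual rectification and size-based fusion rules of the pipeline are not reproduced here.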