We propose a novel robotic system that can improve its semantic perception during deployment. In contrast to the established approach of learning semantics from large datasets and deploying fixed models, we propose a framework in which the semantic model is continuously updated on the robot to adapt to its deployment environments. Our system therefore tightly couples multi-sensor perception and localisation to continuously learn from self-supervised pseudo labels. We study this system in the context of a construction robot registering LiDAR scans of cluttered environments against building models. Our experiments show how the robot's semantic perception improves during deployment and how this translates into improved 3D localisation by filtering clutter out of the LiDAR scans, even across drastically different environments. We further study the risk of catastrophic forgetting that such a continuous learning setting poses. We find memory replay to be an effective measure against forgetting and show that the robotic system can keep improving even when switching between different environments. On average, our system improves segmentation by 60% and localisation by 10% compared to deploying a fixed model, and it maintains this improvement while adapting to further environments.
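The abstract names memory replay as the measure used against catastrophic forgetting. As a minimal sketch of that general idea (not the authors' implementation; `ReplayBuffer`, `make_batch`, and the sample format are hypothetical), a continual learner can keep a fixed-size reservoir of past pseudo-labelled samples and mix them into every training batch:

```python
import random

class ReplayBuffer:
    """Fixed-size buffer of past (scan, pseudo_label) pairs.

    Reservoir sampling keeps an approximately uniform sample over
    all data seen so far, regardless of how long deployment runs.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0  # total samples ever offered to the buffer

    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            # Keep the new sample with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = sample

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))


def make_batch(new_samples, buffer, replay_ratio=0.5):
    """Mix fresh pseudo-labelled samples with replayed old ones so that
    updates on the current environment do not overwrite what the model
    learned in earlier environments."""
    n_replay = int(len(new_samples) * replay_ratio)
    batch = list(new_samples) + buffer.sample(n_replay)
    for s in new_samples:
        buffer.add(s)
    random.shuffle(batch)
    return batch
```

The replay ratio trades plasticity (adapting to the current site) against stability (retaining earlier sites); the reservoir scheme is just one common choice for deciding which old samples to keep under a fixed memory budget.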