This paper describes a method for online refinement of a scene recognition model for robot navigation that accounts for traversable plants, i.e., flexible plant parts that a robot can push aside while moving. In scene recognition systems that consider plants growing over paths, misclassification may cause the robot to get stuck when traversable plants are recognized as obstacles. Yet misclassification is inevitable in any estimation method. In this work, we propose a framework for refining a semantic segmentation model on the fly during the robot's operation. We introduce few-shot segmentation based on weight imprinting, which enables online model refinement without fine-tuning. Training data are collected by observing a human's interaction with the plant parts. We propose a novel robust weight imprinting scheme to mitigate the effect of noise in the masks generated from these interactions. The proposed method was evaluated in experiments on real-world data and shown to outperform ordinary weight imprinting while providing results competitive with fine-tuning with model distillation at a lower computational cost.
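As a rough illustration of the core idea (not the paper's exact formulation), weight imprinting sets a new class's classifier weight to the normalized mean embedding of pixels labeled with that class, so the model can be extended without gradient-based fine-tuning. A minimal sketch follows, assuming a per-pixel embedding map from the segmentation backbone; the function names, the `keep` parameter, and the similarity-based trimming rule in the robust variant are hypothetical stand-ins for the paper's method.

```python
import numpy as np

def imprint_weight(features, mask):
    """Standard weight imprinting: the new class's classifier weight is the
    L2-normalized mean of the normalized embeddings of the masked pixels.

    features: (H, W, D) per-pixel embedding map from the backbone
    mask:     (H, W) boolean mask of pixels labeled as the new class
    """
    vecs = features[mask]                                   # (N, D) masked pixel embeddings
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    proto = vecs.mean(axis=0)                               # class prototype
    return proto / np.linalg.norm(proto)                    # unit-norm imprinted weight

def robust_imprint_weight(features, mask, keep=0.8):
    """Hypothetical robust variant: discard the masked pixels least similar
    to a preliminary prototype before re-averaging, so label noise in the
    interaction-derived mask contributes less to the imprinted weight."""
    vecs = features[mask]
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    proto = vecs.mean(axis=0)
    proto /= np.linalg.norm(proto)
    sims = vecs @ proto                                     # cosine similarity to prototype
    k = max(1, int(len(sims) * keep))
    idx = np.argsort(sims)[-k:]                             # keep the most-similar pixels
    proto = vecs[idx].mean(axis=0)
    return proto / np.linalg.norm(proto)
```

In a typical imprinting setup, the resulting vector would replace or augment one row of the final 1x1 convolutional classifier operating on normalized features, adding the new class in a single forward pass rather than through fine-tuning.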