This paper describes a method for mobile robots operating in plant-rich environments to estimate the traversability of plant parts covering a path and to navigate through them. Conventional mobile robots rely on scene recognition methods that consider only the geometric information of the environment. Such methods, therefore, cannot recognize a path as traversable when it is covered by flexible plants. In this paper, we present a novel framework of image-based scene recognition to realize navigation in such plant-rich environments. Our recognition model exploits a semantic segmentation branch for general object classification and a traversability estimation branch for estimating pixel-wise traversability. The semantic segmentation branch is trained with an unsupervised domain adaptation method, and the traversability estimation branch is trained with label images generated from the robot's traversal experience during the data acquisition phase, which we coin traversability masks. The training procedure of the entire model is, therefore, free from manual annotation. In our experiments, we show that the proposed recognition framework distinguishes traversable plants more accurately than both a conventional semantic segmentation model with traversable-plant and non-traversable-plant classes and an existing image-based traversability estimation method. We also conducted a real-world experiment and confirmed that a robot equipped with the proposed recognition method successfully navigated in plant-rich environments.
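The traversability masks described above can be illustrated with a minimal sketch: pixels that the robot's footprint passed through during data acquisition are labeled traversable, while all remaining pixels are left as an ignore label so they do not contribute to training. The function name, label values, and input format below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Illustrative label values (assumed): 1 = traversable,
# 255 = unknown/ignored during training.
IGNORE, TRAVERSABLE = 255, 1

def traversability_mask(height, width, footprint_pixels):
    """Build a pixel-wise traversability mask from traversal experience.

    footprint_pixels: iterable of (row, col) image pixels that the
    robot's projected footprint covered while it drove along the path.
    """
    mask = np.full((height, width), IGNORE, dtype=np.uint8)
    for r, c in footprint_pixels:
        # Only label pixels that fall inside the image bounds.
        if 0 <= r < height and 0 <= c < width:
            mask[r, c] = TRAVERSABLE
    return mask

# Example: a 4x4 image where the robot traversed three pixels.
mask = traversability_mask(4, 4, [(1, 1), (1, 2), (2, 2)])
```

A loss function with an ignore index (e.g., a cross-entropy loss that skips label 255) would then let the traversability branch learn only from the pixels the robot actually experienced.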