This paper describes a method for agricultural mobile robots in greenhouses to estimate the traversability of plant parts covering a path and to navigate through them. Conventional mobile robots rely on scene recognition methods that consider only the presence of objects; such methods therefore cannot recognize paths covered by flexible plants as traversable. In this paper, we present a novel scene recognition framework for robot navigation, based on image-based semantic segmentation, that accounts for traversable plants covering the paths. In addition, to simplify the creation of training data for the traversability estimation model, we propose a method of generating labels of traversable regions in the images, which we call Traversability masks, from the robot's traversal experience during the data acquisition phase. It is often difficult for humans to distinguish traversable plant parts in images; our method enables consistent and automatic labeling of those image regions based on the actual traversals. We conducted a real-world experiment and confirmed that a robot using the proposed recognition method successfully navigated plant-rich environments.
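To make the label-generation idea concrete, the following is a minimal sketch, not the authors' implementation, of how a traversability mask could be produced from a robot's traversal experience: ground points the robot actually drove over (expressed in the camera frame) are projected into the image with a pinhole model and filled into a binary mask. The image size, camera intrinsics `K`, and the synthetic footprint points are illustrative assumptions only.

```python
# Hedged sketch (assumed pipeline, not the paper's code): project the traversed
# footprint onto the image plane to obtain a binary traversability mask.
import numpy as np
import cv2

H, W = 480, 640                          # assumed image size
K = np.array([[525.0, 0.0, 320.0],       # assumed pinhole intrinsics
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])

def traversability_mask(footprint_cam, image_shape=(H, W), K=K):
    """Project 3D footprint points (N x 3, camera frame, z > 0) into a binary mask."""
    pts = footprint_cam[footprint_cam[:, 2] > 0]      # keep points in front of the camera
    uv = (K @ pts.T).T                                # pinhole projection
    uv = (uv[:, :2] / uv[:, 2:3]).astype(np.int32)    # normalize to pixel coordinates
    mask = np.zeros(image_shape, dtype=np.uint8)
    if len(uv) >= 3:
        hull = cv2.convexHull(uv)                     # fill the traversed region
        cv2.fillConvexPoly(mask, hull, 255)
    return mask

# Usage example with a synthetic strip of ground the robot drove over
# (y = 0.5 m below the camera is an assumption for illustration).
xs = np.linspace(-0.3, 0.3, 20)
zs = np.linspace(1.0, 4.0, 40)
footprint = np.array([[x, 0.5, z] for x in xs for z in zs])
mask = traversability_mask(footprint)
print("labeled pixels:", int((mask > 0).sum()))
```

In practice the traversed region would come from the robot's recorded poses during data acquisition rather than a synthetic grid; the sketch only illustrates the projection-and-fill step.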