Visual Simultaneous Localization and Mapping (SLAM) systems are an essential component of agricultural robotics, enabling autonomous navigation and the construction of accurate 3D maps of agricultural fields. However, lack of texture, varying illumination conditions, and lack of structure in the environment pose a challenge for visual SLAM systems that rely on traditional feature extraction and matching algorithms such as ORB or SIFT. This paper proposes (1) an object-level feature association algorithm that enables robust 3D reconstruction by exploiting the structure inherent in robotic navigation through agricultural fields, and (2) an object-level SLAM system that leverages recent advances in deep-learning-based object detection and segmentation to detect and segment semantic objects in the environment for use as SLAM landmarks. We evaluate our SLAM system on a stereo image dataset of a sorghum field. We show that our object-based feature association algorithm enables us to map 78% of a sorghum range on average, compared with 38% using traditional visual features. We also compare our system against ORB-SLAM2, a state-of-the-art visual SLAM algorithm.
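To make the object-level association idea concrete, the sketch below shows one generic way such an association step could look: detected objects from consecutive frames are matched greedily by semantic class and 3D centroid distance. This is an illustrative assumption, not the paper's actual algorithm; the object representation, distance threshold, and greedy matching strategy are all hypothetical choices made for the example.

```python
import math

def associate_objects(prev_objs, curr_objs, max_dist=0.5):
    """Greedy nearest-neighbour association of detected objects between
    two frames (illustrative sketch, not the paper's exact method).

    Each object is a (class_label, (x, y, z) centroid) tuple. Objects are
    matched only within the same semantic class and only if their
    centroids are within max_dist metres. Returns (prev_idx, curr_idx)
    pairs, each object matched at most once.
    """
    # Collect all admissible (distance, prev_index, curr_index) candidates.
    candidates = []
    for i, (cls_p, cp) in enumerate(prev_objs):
        for j, (cls_c, cc) in enumerate(curr_objs):
            if cls_p != cls_c:
                continue  # never match across semantic classes
            d = math.dist(cp, cc)
            if d <= max_dist:
                candidates.append((d, i, j))

    # Greedily accept the closest remaining pair first.
    candidates.sort()
    matched_p, matched_c, pairs = set(), set(), []
    for d, i, j in candidates:
        if i in matched_p or j in matched_c:
            continue
        matched_p.add(i)
        matched_c.add(j)
        pairs.append((i, j))
    return pairs
```

In an object-level SLAM pipeline of this kind, the matched pairs would then feed the pose estimation and mapping back end in place of low-level ORB/SIFT correspondences, which is what makes the approach less sensitive to texture-poor crop imagery.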