Autonomous navigation in agricultural environments is often challenged by the varying field conditions that arise in arable fields. State-of-the-art solutions for autonomous navigation in these environments require expensive hardware such as RTK-GPS. This paper presents a robust crop row detection algorithm that withstands those variations while detecting crop rows for visual servoing. A dataset of sugar beet images was created covering 43 combinations of 11 field variations found in arable fields. The novel crop row detection algorithm is evaluated both for crop row detection performance and for the capability of visual servoing along a crop row. The algorithm uses only RGB images as input, and a convolutional neural network predicts the crop row masks. Our algorithm outperformed the baseline method, which uses colour-based segmentation, across all combinations of field variations. We use a combined performance indicator that accounts for the angular and displacement errors of the crop row detection. Our algorithm exhibited its worst performance during the early growth stages of the crop.
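The combined performance indicator mentioned above can be sketched as a normalised average of the angular and displacement error terms. The normalisation bounds and the equal weighting below are illustrative assumptions, not the paper's exact metric definition.

```python
def combined_row_error(angle_err_deg, disp_err_px,
                       max_angle_deg=90.0, max_disp_px=320.0):
    """Combine the angular and displacement errors of a detected crop row
    into a single score in [0, 1], where 0 is a perfect detection.

    The normalisation bounds (max_angle_deg, max_disp_px) and the equal
    0.5/0.5 weighting are assumptions for illustration only.
    """
    # Clip each error to its bound, then normalise to [0, 1].
    a = min(abs(angle_err_deg), max_angle_deg) / max_angle_deg
    d = min(abs(disp_err_px), max_disp_px) / max_disp_px
    # Equally weighted average of the two normalised errors.
    return 0.5 * a + 0.5 * d

# Example: 5 degrees of angular error, 16 px of lateral displacement.
score = combined_row_error(5.0, 16.0)
```

A perfect detection (zero angle and displacement error) scores 0, and errors at or beyond both bounds score 1, which makes scores comparable across field-variation combinations.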