We describe a system for visually guided autonomous navigation of under-canopy farm robots. Low-cost under-canopy robots can drive between crop rows under the plant canopy and accomplish tasks that are infeasible for over-the-canopy drones or larger agricultural equipment. However, autonomously navigating them under the canopy presents a number of challenges: unreliable GPS and LiDAR, high cost of sensing, challenging farm terrain, clutter due to leaves and weeds, and large variability in appearance over the season and across crop types. We address these challenges with a modular system that leverages machine learning for robust and generalizable perception from monocular RGB images captured by low-cost cameras, and model predictive control for accurate control in challenging terrain. Our system, CropFollow, autonomously drives 485 meters per intervention on average, outperforming a state-of-the-art LiDAR-based system (286 meters per intervention) in extensive field testing spanning over 25 km.
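To make the modular perception-plus-control design concrete, the following is a minimal sketch of one step of such a pipeline, not the authors' implementation: a learned perception model (stubbed here as the hypothetical predict_row_pose) estimates the robot's heading error and lateral offset relative to the crop-row centerline from a monocular RGB frame, and a simple model predictive controller (mpc_steer) picks a steering command under an assumed unicycle motion model. The cost weights, horizon, and speed are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical perception interface (assumption, not the paper's model):
# maps a monocular RGB frame to heading error (rad) and lateral offset (m)
# relative to the crop-row centerline. Stubbed with fixed values here.
def predict_row_pose(rgb_image):
    return {"heading_error": 0.05, "lateral_offset": 0.10}

def mpc_steer(heading_error, lateral_offset, v=0.5, dt=0.2, horizon=10):
    """Pick a sequence of angular velocities that drives heading error and
    lateral offset toward zero under a unicycle model; apply the first one."""
    def cost(omegas):
        theta, y, total = heading_error, lateral_offset, 0.0
        for w in omegas:
            theta += w * dt              # heading update
            y += v * np.sin(theta) * dt  # lateral drift along the row
            total += 10.0 * y**2 + 1.0 * theta**2 + 0.1 * w**2  # illustrative weights
        return total
    res = minimize(cost, np.zeros(horizon), method="L-BFGS-B",
                   bounds=[(-1.0, 1.0)] * horizon)
    return res.x[0]

# One step of the perception -> control loop on a dummy camera frame.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
pose = predict_row_pose(frame)
omega = mpc_steer(pose["heading_error"], pose["lateral_offset"])
print(f"commanded angular velocity: {omega:.3f} rad/s")
```

In practice the perception model would be a trained network running on each camera frame, and the optimizer would be warm-started from the previous solution; the sketch only illustrates how the two modules hand off a row-relative pose estimate to a receding-horizon controller.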