We demonstrate a successful navigation and docking control system for the John Deere Tango autonomous mower, using only a single camera as input. This vision-only system is of interest because it is inexpensive, simple to put into production, and requires no external sensing, in contrast to existing systems that rely on integrated position sensors and global positioning system (GPS) technologies. To build our system, we combined a state-of-the-art object detection architecture, You Only Look Once (YOLO), with a reinforcement learning (RL) architecture, Double Deep Q-Networks (Double DQN). The object detection network identifies features on the mower and passes its output to the RL network, providing a low-dimensional representation that enables rapid and robust training. The RL network then learns, in a custom simulation environment, how to navigate the machine to the desired docking location. When tested on mower hardware, the system is able to dock with centimeter-level accuracy from arbitrary initial locations and orientations.
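To make the architecture concrete, the sketch below (PyTorch) shows one way the detector's output could be flattened into a low-dimensional state and how the Double DQN bootstrap target is formed, with the online network selecting the next action and the target network evaluating it. This is a minimal illustrative sketch: the feature layout, action set, network sizes, and helper names (`QNet`, `detections_to_state`, `double_dqn_target`) are assumptions for exposition, not the exact configuration used in this work.

```python
# Illustrative sketch: YOLO-style detections -> low-dimensional state -> Double DQN target.
# All sizes and names here are hypothetical, not the paper's exact configuration.
import torch
import torch.nn as nn

N_FEATURES = 8   # e.g. image-plane (x, y) coordinates of 4 detected mower keypoints
N_ACTIONS = 5    # e.g. discrete drive commands: forward, left, right, reverse, stop
GAMMA = 0.99     # discount factor

class QNet(nn.Module):
    """Small MLP mapping the low-dimensional detection state to action values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)

online, target = QNet(), QNet()
target.load_state_dict(online.state_dict())

def detections_to_state(boxes):
    """Flatten detector output (e.g. YOLO box/keypoint centers) into the RL state vector."""
    return torch.as_tensor(boxes, dtype=torch.float32).flatten()[:N_FEATURES]

def double_dqn_target(reward, next_state, done):
    """Double DQN rule: the online net selects the next action, the target net evaluates it."""
    with torch.no_grad():
        next_action = online(next_state).argmax(dim=-1, keepdim=True)
        next_q = target(next_state).gather(-1, next_action).squeeze(-1)
    return reward + GAMMA * next_q * (1.0 - done)

# Example: one synthetic transition (random keypoints stand in for real detections).
s = detections_to_state(torch.rand(4, 2))
s_next = detections_to_state(torch.rand(4, 2))
a = torch.randint(N_ACTIONS, (1,))          # action taken (from a replay buffer in practice)
y = double_dqn_target(reward=1.0, next_state=s_next, done=0.0)
loss = nn.functional.smooth_l1_loss(online(s)[a].squeeze(0), y)
```

Decoupling action selection (online network) from action evaluation (target network) is what distinguishes Double DQN from vanilla DQN and reduces overestimation of Q-values; the low-dimensional detection state is what keeps the Q-network small enough to train rapidly in simulation.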