We propose a novel method for autonomous legged-robot navigation in densely vegetated environments containing both pliable (traversable) and non-pliable (untraversable) vegetation. We present a few-shot learning classifier that can be trained on a few hundred RGB images to differentiate flora the robot can push through from flora it must circumvent. Using the vegetation classification and 2D lidar scans, our method constructs a vegetation-aware traversability cost map that represents pliable and non-pliable obstacles with lower and higher traversability costs, respectively. Our cost map construction accounts for vegetation misclassifications, further lowering the risk of collisions, freezing, and entrapment during navigation. Furthermore, we propose holonomic recovery behaviors for scenarios where the robot freezes or becomes physically entrapped in dense, pliable vegetation. We demonstrate our method on a Boston Dynamics Spot robot in real-world unstructured environments with sparse and dense tall grass, bushes, and trees. Compared to existing methods, we observe a 25-90% increase in success rate, a 10-90% decrease in freezing rate, and up to a 65% decrease in false positive rate.
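The confidence-aware costing described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cost constants, grid resolution, and the linear blending of pliable/non-pliable costs by classifier confidence are all assumptions chosen to show the idea that a low-confidence "pliable" prediction should be priced closer to an obstacle.

```python
import numpy as np

# Hypothetical cost values; the paper does not specify its actual costs.
COST_FREE = 0.0
COST_PLIABLE = 0.3      # traversable vegetation: low but nonzero cost
COST_NON_PLIABLE = 1.0  # untraversable vegetation: priced like an obstacle

def vegetation_cost(p_pliable: float) -> float:
    """Blend pliable/non-pliable costs by classifier confidence.

    A low-confidence 'pliable' prediction lands closer to the obstacle
    cost, which hedges against misclassification of the vegetation.
    """
    return p_pliable * COST_PLIABLE + (1.0 - p_pliable) * COST_NON_PLIABLE

def build_costmap(lidar_hits: np.ndarray, p_pliable: np.ndarray,
                  shape=(100, 100)) -> np.ndarray:
    """Rasterize lidar vegetation returns into a traversability cost map.

    lidar_hits: (N, 2) integer grid cells with vegetation returns.
    p_pliable:  (N,) classifier probability that each hit is pliable.
    """
    costmap = np.full(shape, COST_FREE)
    for (r, c), p in zip(lidar_hits, p_pliable):
        # Keep the worst (highest) cost seen for a cell.
        costmap[r, c] = max(costmap[r, c], vegetation_cost(p))
    return costmap
```

A planner running on this grid would then prefer free space, cut through pliable vegetation when the detour cost is high, and route around cells priced near the obstacle cost.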