Goal-conditioned policies for robotic navigation can be trained on large, unannotated datasets, providing good generalization to real-world settings. However, specifying goals is an unnatural interface for users, particularly in vision-based settings where a goal must be given as an image. Language provides a more convenient modality for communicating with robots, but contemporary methods typically require expensive supervision in the form of trajectories annotated with language descriptions. We present LM-Nav, a system for robotic navigation that enjoys the benefits of training on large, unannotated datasets of trajectories while still providing a high-level interface to the user. Instead of relying on a labeled instruction-following dataset, we show that such a system can be constructed entirely from pre-trained models for navigation (ViNG), image-language association (CLIP), and language modeling (GPT-3), without any fine-tuning or language-annotated robot data. We instantiate LM-Nav on a real-world mobile robot and demonstrate long-horizon navigation through complex outdoor environments from natural language instructions. For videos of our experiments, the code release, and an interactive Colab notebook that runs in your browser, please see our project page: https://sites.google.com/view/lmnav
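As a rough illustration of how the three pre-trained models compose, the sketch below parses an instruction into landmarks with GPT-3, grounds each landmark to an observation image with CLIP, and hands the matched image to a goal-conditioned planner. This is a minimal sketch under stated assumptions, not the paper's implementation: it assumes the `openai` (pre-1.0 Completion API, with `OPENAI_API_KEY` set in the environment) and `clip` (github.com/openai/CLIP) Python packages, and `ving_plan` is a hypothetical stand-in for the ViNG navigation model; the actual system's prompts, graph search over the topological map, and planner interface differ.

```python
# Minimal sketch of the LM-Nav pipeline: GPT-3 for landmark extraction,
# CLIP for landmark grounding, and a hypothetical ViNG planner stub.
import openai  # pre-1.0 API; reads OPENAI_API_KEY from the environment
import clip
import torch
from PIL import Image


def extract_landmarks(instruction: str) -> list[str]:
    """Ask GPT-3 to list, in order, the landmarks named in an instruction."""
    prompt = (
        "List the landmarks, in order, from this navigation instruction, "
        "one per line:\n" + instruction + "\nLandmarks:\n"
    )
    resp = openai.Completion.create(
        model="text-davinci-002", prompt=prompt, max_tokens=64, temperature=0.0
    )
    return [
        line.strip("- ").strip()
        for line in resp.choices[0].text.splitlines()
        if line.strip()
    ]


def ground_landmark(landmark, image_paths, model, preprocess, device):
    """Return the index of the observation image CLIP scores highest for the landmark."""
    images = torch.stack([preprocess(Image.open(p)) for p in image_paths]).to(device)
    text = clip.tokenize([f"a photo of {landmark}"]).to(device)
    with torch.no_grad():
        image_feats = model.encode_image(images)
        text_feats = model.encode_text(text)
        image_feats /= image_feats.norm(dim=-1, keepdim=True)
        text_feats /= text_feats.norm(dim=-1, keepdim=True)
        sims = (image_feats @ text_feats.T).squeeze(1)  # cosine similarities
    return int(sims.argmax())


def navigate(instruction, image_paths, ving_plan):
    """End-to-end sketch: parse landmarks, ground each to an image, plan toward it."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)
    for landmark in extract_landmarks(instruction):
        node = ground_landmark(landmark, image_paths, model, preprocess, device)
        # Hypothetical planner interface: drive to the grounded goal image.
        ving_plan(goal_image=image_paths[node])
```

One design point this sketch preserves from the abstract: no component is fine-tuned, and no language-annotated robot data is used; the language model, the vision-language model, and the navigation model interact only through text (landmark strings) and images (grounded goal observations).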