Large language models (LLMs) have unlocked new capabilities for task planning from human instructions. However, prior attempts to apply LLMs to real-world robotic tasks are limited by the lack of grounding in the surrounding scene. In this paper, we develop NLMap, an open-vocabulary and queryable scene representation that addresses this problem. NLMap serves as a framework for gathering and integrating contextual information into LLM planners, allowing them to see and query the objects available in the scene before generating a context-conditioned plan. NLMap first builds a natural-language-queryable scene representation with visual language models (VLMs). An LLM-based object proposal module parses the instruction and proposes the objects involved, which are used to query the scene representation for object availability and location. An LLM planner then generates a plan conditioned on this scene information. NLMap allows robots to operate without a fixed list of objects or executable options, enabling real-robot operation that previous methods could not achieve. Project website: https://nlmap-saycan.github.io
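To make the query-then-plan flow concrete, the following Python sketch illustrates one plausible reading of the pipeline: a VLM-embedded scene map is queried with text embeddings of objects proposed by an LLM, and only the objects found in the scene are passed to the planner. All names here (`SceneRepresentation`, `embed_text`, `propose_objects`, `plan`) are hypothetical placeholders for illustration, not NLMap's actual interfaces.

```python
# Minimal sketch of a query-then-plan loop, assuming CLIP-style embeddings.
# Helper callables are hypothetical placeholders, not the paper's API.

import numpy as np


class SceneRepresentation:
    """Open-vocabulary scene map: each detected region stores a VLM image
    embedding together with its position in the scene."""

    def __init__(self):
        self.regions = []  # list of (unit-norm embedding, position)

    def add_region(self, embedding, position):
        self.regions.append((embedding / np.linalg.norm(embedding), position))

    def query(self, text_embedding, threshold=0.25):
        """Return positions of regions similar to the text query;
        an empty list means the object is not available in the scene."""
        text_embedding = text_embedding / np.linalg.norm(text_embedding)
        return [pos for emb, pos in self.regions
                if float(emb @ text_embedding) > threshold]


def ground_and_plan(instruction, scene, embed_text, propose_objects, plan):
    """An LLM proposes objects the instruction may involve, the scene map
    reports which are present and where, and the LLM planner conditions
    its plan on that grounded context."""
    candidates = propose_objects(instruction)           # LLM object proposal
    found = {name: scene.query(embed_text(name))        # VLM scene query
             for name in candidates}
    available = {n: pos for n, pos in found.items() if pos}
    return plan(instruction, available)                 # context-conditioned plan
```

In this sketch, the planner never needs a predefined object list: whatever the proposal module names and the scene map confirms becomes the planning context.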