Grounding language to the visual observations of a navigating agent can be performed using off-the-shelf visual-language models pretrained on Internet-scale data (e.g., image captions). While this is useful for matching images to natural language descriptions of object goals, it remains disjoint from the process of mapping the environment, and thus lacks the spatial precision of classic geometric maps. To address this problem, we propose VLMaps, a spatial map representation that directly fuses pretrained visual-language features with a 3D reconstruction of the physical world. VLMaps can be built autonomously from a robot's video feed using standard exploration approaches, and enables natural language indexing of the map without additional labeled data. Specifically, when combined with large language models (LLMs), VLMaps can be used to (i) translate natural language commands into a sequence of open-vocabulary navigation goals (which, beyond prior work, can be spatial, e.g., "in between the sofa and TV" or "three meters to the right of the chair") localized directly in the map, and (ii) be shared among multiple robots with different embodiments to generate new obstacle maps on the fly (from a list of obstacle categories). Extensive experiments carried out in simulated and real-world environments show that VLMaps enables navigation according to more complex language instructions than existing methods. Videos are available at https://vlmaps.github.io.
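To make the fusion and language-indexing idea in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of one way to fuse per-pixel visual-language features into a top-down grid map and query it with text. The grid size, cell resolution, feature dimension, and the fact that `pixel_feats` and `text_embeddings` come from some pretrained visual-language encoder are all assumptions for illustration; the real system builds the map from a full 3D reconstruction.

```python
import numpy as np

# Hypothetical sketch: accumulate per-pixel visual-language features
# (e.g., from a pretrained image encoder) into a 2D grid map, then
# score every cell against text embeddings via cosine similarity to
# obtain open-vocabulary landmark heatmaps for navigation goals.

D = 512      # feature dimension (assumption)
GRID = 100   # map is GRID x GRID cells (assumption)
CELL = 0.05  # cell size in meters (assumption)

grid_feat = np.zeros((GRID, GRID, D))  # running sum of features per cell
grid_cnt = np.zeros((GRID, GRID, 1))   # number of observations per cell

def integrate_frame(points_world, pixel_feats):
    """Fuse one RGB-D frame.

    points_world: (N, 3) back-projected points in world coordinates.
    pixel_feats:  (N, D) visual-language features for the same pixels.
    """
    ix = np.clip((points_world[:, 0] / CELL).astype(int) + GRID // 2, 0, GRID - 1)
    iy = np.clip((points_world[:, 1] / CELL).astype(int) + GRID // 2, 0, GRID - 1)
    np.add.at(grid_feat, (ix, iy), pixel_feats)  # sum features per cell
    np.add.at(grid_cnt, (ix, iy), 1.0)

def localize(text_embeddings):
    """Score map cells against text queries.

    text_embeddings: (K, D) embeddings of open-vocabulary queries
    (e.g., obstacle categories or landmark names).
    Returns a (GRID, GRID, K) similarity volume.
    """
    cell = grid_feat / np.maximum(grid_cnt, 1.0)
    cell = cell / (np.linalg.norm(cell, axis=-1, keepdims=True) + 1e-8)
    txt = text_embeddings / np.linalg.norm(text_embeddings, axis=-1, keepdims=True)
    return cell @ txt.T  # cosine similarity per cell and query
```

Under these assumptions, an obstacle map for a new embodiment could be generated on the fly by calling `localize` with the embeddings of that robot's obstacle category list and thresholding the resulting similarity volume, while spatial goals would be resolved by reasoning over the per-category heatmaps.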