Household environments are visually diverse. Embodied agents performing Vision-and-Language Navigation (VLN) in the wild must handle this diversity while following arbitrary language instructions. Recently, vision-language models such as CLIP have shown strong performance on zero-shot object recognition. In this work, we ask whether these models are also capable of zero-shot language grounding. In particular, we use CLIP to tackle the novel problem of zero-shot VLN with natural-language referring expressions that describe target objects, in contrast to past work that used simple language templates describing object classes. We examine CLIP's ability to make sequential navigation decisions without any dataset-specific finetuning and study how it influences the path an agent takes. Our results on the coarse-grained instruction-following task of REVERIE demonstrate the navigational capability of CLIP, surpassing the supervised baseline in terms of both success rate (SR) and success weighted by path length (SPL). More importantly, we quantitatively show that our CLIP-based zero-shot approach generalizes better than state-of-the-art, fully supervised approaches, delivering more consistent performance across environments when evaluated via Relative Change in Success (RCS).
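The abstract does not specify the exact scoring mechanism, so the following is only an illustrative sketch of how CLIP can be used zero-shot to rank candidate views against a referring expression at each navigation step. The model checkpoint and the `score_candidates` helper are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: rank candidate navigable views by CLIP image-text similarity.
# Assumes the Hugging Face transformers CLIP API; "openai/clip-vit-base-patch32"
# and score_candidates are illustrative choices, not the authors' method.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def score_candidates(instruction: str, candidate_views: list[Image.Image]) -> torch.Tensor:
    """Return one CLIP similarity score per candidate view for a referring expression."""
    inputs = processor(text=[instruction], images=candidate_views,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image has shape (num_views, 1): image-text similarity scores.
    return outputs.logits_per_image.squeeze(-1)

# Usage: at each step, move toward the navigable view that best matches the expression.
# views = [Image.open(p) for p in candidate_view_paths]  # one image per navigable direction
# best = int(score_candidates("the red armchair next to the window", views).argmax())
```

In such a scheme, no dataset-specific finetuning is required; the pretrained image-text similarity alone drives each sequential decision, which is the zero-shot setting the abstract describes.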