Vision-and-language navigation (VLN) is a multimodal task in which an agent follows natural language instructions to navigate in visual environments. Multiple setups have been proposed, and researchers have applied new model architectures and training techniques to boost navigation performance. However, recent studies have witnessed a slow-down in performance improvements on both indoor and outdoor VLN tasks, and the inner mechanisms by which agents make navigation decisions remain unclear. To the best of our knowledge, how agents perceive the multimodal input is under-studied and clearly needs investigation. In this work, we conduct a series of diagnostic experiments to unveil agents' focus during navigation. Results show that indoor navigation agents refer to both object and direction tokens in the instruction when making decisions. In contrast, outdoor navigation agents rely heavily on direction tokens and have a poor understanding of object tokens. Furthermore, instead of merely staring at surrounding objects, indoor navigation agents can set their sights on objects farther from the current viewpoint. Regarding vision-and-language alignment, many models claim that they can align object tokens with specific visual targets, but we cast doubt on the reliability of such alignments.