Vision-Language Navigation (VLN) tasks often leverage panoramic RGB and depth inputs to provide rich spatial cues for action planning, but these sensors can be costly or less accessible in real-world deployments. Recent approaches based on Vision-Language Action (VLA) models achieve strong results with monocular input, yet they still lag behind methods that use panoramic RGB-D information. We present MonoDream, a lightweight VLA framework that enables monocular agents to learn a Unified Navigation Representation (UNR). This shared feature representation jointly aligns navigation-relevant visual semantics (e.g., global layout, depth, and future cues) with language-grounded action intent, enabling more reliable action prediction. MonoDream further introduces Latent Panoramic Dreaming (LPD) tasks to supervise the UNR, training the model to predict latent features of panoramic RGB and depth observations at both current and future steps from monocular input alone. Experiments on multiple VLN benchmarks show that MonoDream consistently improves monocular navigation performance and significantly narrows the gap with panorama-based agents.
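To make the described training signal concrete, below is a minimal PyTorch-style sketch of how a latent-panoramic-dreaming auxiliary objective could be wired around a shared navigation representation. The module names (`UnifiedNavRepr`, `lpd_loss`), feature dimensions, fusion architecture, prediction horizon, and the MSE alignment loss are all illustrative assumptions and not the paper's exact design; panoramic latent targets would come from a frozen panoramic encoder used only at training time, so inference remains purely monocular.

```python
# Hypothetical sketch of a Latent-Panoramic-Dreaming-style auxiliary objective.
# All names, dimensions, and loss choices are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnifiedNavRepr(nn.Module):
    """Fuses monocular visual tokens and instruction tokens into a shared
    navigation representation (UNR) used by both the action head and the
    latent 'dreaming' heads."""

    def __init__(self, dim: int = 512, num_actions: int = 4, horizon: int = 2):
        super().__init__()
        self.fuse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.action_head = nn.Linear(dim, num_actions)
        # Heads that predict latent panoramic RGB / depth features for the
        # current step plus `horizon` future steps, from the UNR alone.
        self.pano_rgb_head = nn.Linear(dim, dim * (horizon + 1))
        self.pano_depth_head = nn.Linear(dim, dim * (horizon + 1))
        self.horizon = horizon
        self.dim = dim

    def forward(self, mono_tokens, text_tokens):
        # Pool the fused sequence into a single UNR vector per sample.
        unr = self.fuse(torch.cat([mono_tokens, text_tokens], dim=1)).mean(dim=1)
        action_logits = self.action_head(unr)
        pred_rgb = self.pano_rgb_head(unr).view(-1, self.horizon + 1, self.dim)
        pred_depth = self.pano_depth_head(unr).view(-1, self.horizon + 1, self.dim)
        return action_logits, pred_rgb, pred_depth


def lpd_loss(pred_rgb, pred_depth, target_rgb, target_depth):
    """Align predicted latents with targets from a frozen panoramic encoder
    (targets are only needed during training; deployment stays monocular)."""
    return F.mse_loss(pred_rgb, target_rgb) + F.mse_loss(pred_depth, target_depth)


# Toy usage with random tensors standing in for encoder outputs.
model = UnifiedNavRepr()
mono = torch.randn(2, 16, 512)      # monocular visual tokens
text = torch.randn(2, 8, 512)       # instruction tokens
tgt_rgb = torch.randn(2, 3, 512)    # latent panoramic RGB targets (t, t+1, t+2)
tgt_depth = torch.randn(2, 3, 512)  # latent panoramic depth targets
logits, p_rgb, p_depth = model(mono, text)
aux = lpd_loss(p_rgb, p_depth, tgt_rgb, tgt_depth)
```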