Vision-Language-Action (VLA) models provide a promising paradigm for robot learning by integrating visual perception with language-guided policy learning. However, most existing approaches rely on 2D visual inputs to perform actions in 3D physical environments, creating a significant gap between perception and action grounding. To bridge this gap, we propose a Spatial-Aware VLA Pretraining paradigm that performs explicit alignment between visual space and physical space during pretraining, enabling models to acquire 3D spatial understanding before robot policy learning. Starting from pretrained vision-language models, we leverage large-scale human demonstration videos to extract 3D visual and 3D action annotations, forming a new source of supervision that aligns 2D visual observations with 3D spatial reasoning. We instantiate this paradigm with VIPA-VLA, a dual-encoder architecture that incorporates a 3D visual encoder to augment semantic visual representations with 3D-aware features. When adapted to downstream robot tasks, VIPA-VLA achieves significantly improved grounding between 2D vision and 3D action, resulting in more robust and generalizable robotic policies.
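The abstract describes a dual-encoder design in which a 3D-aware visual encoder augments the VLM's 2D semantic features before policy learning. The sketch below is only an illustrative reading of that idea, not the paper's implementation: the encoder backbones, feature dimensions, and the additive fusion used here (`DualVisualEncoder`, `semantic_encoder`, `spatial_encoder`, `fuse`) are all hypothetical placeholders.

```python
import torch
import torch.nn as nn


class DualVisualEncoder(nn.Module):
    """Minimal sketch of a dual-encoder fusion: 2D semantic features from the
    pretrained VLM vision tower are combined with 3D-aware features, and the
    fused tokens are passed on to the VLA policy. Backbones and fusion are
    assumptions, not the VIPA-VLA architecture."""

    def __init__(self, dim_2d: int = 1024, dim_3d: int = 512, dim_out: int = 1024):
        super().__init__()
        # Placeholders standing in for the pretrained 2D semantic encoder and
        # the 3D visual encoder; the paper does not specify these modules.
        self.semantic_proj = nn.Linear(dim_2d, dim_out)
        self.spatial_proj = nn.Linear(dim_3d, dim_out)
        # Simple additive fusion with a learned projection; the actual fusion
        # mechanism (e.g. cross-attention) may differ.
        self.fuse = nn.Sequential(nn.LayerNorm(dim_out), nn.Linear(dim_out, dim_out))

    def forward(self, feat_2d: torch.Tensor, feat_3d: torch.Tensor) -> torch.Tensor:
        sem = self.semantic_proj(feat_2d)   # 2D semantic visual tokens
        spa = self.spatial_proj(feat_3d)    # 3D-aware visual tokens
        return self.fuse(sem + spa)         # fused tokens for the policy head


# Toy usage: a batch of 2 frames with 196 visual tokens each.
feat_2d = torch.randn(2, 196, 1024)  # e.g. VLM vision-tower features
feat_3d = torch.randn(2, 196, 512)   # e.g. features lifted from depth/point clouds
fused = DualVisualEncoder()(feat_2d, feat_3d)
print(fused.shape)  # torch.Size([2, 196, 1024])
```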