Vision-Language-Action (VLA) models have shown strong potential for general-purpose robotic manipulation, but their reliance on expert demonstrations limits their ability to learn from failures and perform self-correction. Reinforcement learning (RL) addresses these limitations through self-improving interaction with the physical environment, but it suffers from high sample complexity on real robots. We introduce World-Model-based Policy Optimization (WMPO), a principled framework for on-policy RL of VLA models that requires no interaction with the real environment. In contrast to widely used latent world models, WMPO relies on pixel-space predictions, which keep the "imagined" trajectories aligned with the VLA features pretrained on web-scale images. Crucially, WMPO enables the policy to be trained with on-policy GRPO, which yields stronger performance than the commonly used off-policy methods. Extensive experiments in both simulation and real-robot settings demonstrate that WMPO (i) substantially improves sample efficiency, (ii) achieves stronger overall performance, (iii) exhibits emergent behaviors such as self-correction, and (iv) demonstrates robust generalization and lifelong learning capabilities.
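The core mechanism described above is on-policy GRPO applied to trajectories "imagined" by a pixel-space world model rather than collected on a real robot. The sketch below is a minimal illustration of that idea, not the paper's implementation: it assumes hypothetical interfaces `policy.sample`, `world_model.predict`, and `reward_fn`, and uses the standard group-relative (mean/std-normalized) advantage of GRPO over a group of imagined rollouts.

```python
import torch


def grpo_advantages(returns: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """GRPO-style group-relative advantages: normalize each rollout's return
    by the mean and std of its group."""
    return (returns - returns.mean()) / (returns.std() + eps)


def wmpo_step(policy, world_model, reward_fn, obs0, group_size=8, horizon=32):
    """One hypothetical WMPO iteration (sketch, not the paper's API):
    roll out the current VLA policy inside the pixel-space world model,
    score the imagined trajectories, and take an on-policy GRPO-style update.
    `reward_fn` is assumed to return a Python float per predicted frame."""
    returns, logps = [], []
    for _ in range(group_size):
        obs, ret, logp_sum = obs0, 0.0, 0.0
        for _ in range(horizon):
            action, logp = policy.sample(obs)        # VLA policy conditioned on pixels
            obs = world_model.predict(obs, action)   # pixel-space next-frame prediction
            ret += reward_fn(obs)
            logp_sum = logp_sum + logp
        returns.append(ret)
        logps.append(logp_sum)

    adv = grpo_advantages(torch.tensor(returns))     # group-relative advantages
    loss = -(adv * torch.stack(logps)).mean()        # on-policy surrogate objective
    loss.backward()                                   # optimizer step omitted for brevity
```

Because every rollout in the group is generated by the current policy inside the world model, the update remains on-policy without any real-robot interaction, which is the sample-efficiency argument the abstract makes.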